◎ HISTORY TIMEWAR · HISTORY · THE-PANDEMIC-PREPAREDNESS-INDUSTRY · UPDATED 2026·04·18 · REV. 07

The Pandemic Preparedness Industry.

An apparatus that produces the pandemics it claims to prepare for, built on a class of object whose existence has never been demonstrated by any method the apparatus's own epistemology would accept from a competing tradition.

8,362 WORDS
38 MIN READ
14 SECTIONS
14 ENTRY LINKS
◎ EPIGRAPH
The microbe is nothing, the terrain is everything. — Attributed to Louis Pasteur on his deathbed; the apocryphal concession is usually rendered as addressed to Claude Bernard, though the terrain tradition reads it as conceding the priority of Antoine Béchamp

The Sector

The pandemic preparedness industry is the contemporary occupant of the bio-sector slot in the theater-state crisis-object configuration. Its function in the apparatus is parallel to the function the nuclear deterrent industry occupied during the cold war and continues to occupy in residual form: the maintenance of a particular kind of population-level dread oriented around an invisible, scientifically authorized, periodically refreshed catastrophic-risk frame. Management of the frame is delegated to a technocratic establishment whose interests align with its maintenance and whose institutional survival depends on its continued salience.

The industry as it currently exists was largely constructed in the post-2001 period under the dual frame of bioterrorism-after-anthrax and emerging-infectious-disease-after-SARS, accelerated through the 2009 swine flu, the 2014 ebola, and the 2016 zika cycles, and reached operational saturation during the 2020-2023 covid response. Its institutional anchors are the Johns Hopkins Center for Health Security, the Coalition for Epidemic Preparedness Innovations (CEPI) launched at davos in 2017, the Bill and Melinda Gates Foundation’s pandemic-preparedness funding lines, the Wellcome Trust, the U.S. Biomedical Advanced Research and Development Authority (BARDA), the WHO’s pandemic preparedness coordination apparatus, the EcoHealth Alliance and its associated viral-discovery grant network, and the various university and government laboratories conducting gain-of-function research under contracts whose budgetary lines are public and whose operational details are not.

The industry produces three outputs simultaneously: the periodic outbreaks themselves, the institutional response to the outbreaks, and the ongoing narrative apparatus that keeps the audience primed between outbreaks. The third output is the load-bearing one. An audience that has been kept primed will tolerate response measures during an outbreak that an unprimed audience would refuse, and the priming is the work the industry has been doing continuously since the early 2000s through tabletop exercises, simulation reports, congressional testimony, scenario planning documents, and the steady production of mass-market journalism whose function is the maintenance of pandemic-preparedness as a permanent register of public anxiety.

Event 201 and the Pre-Mapped Response

In october 2019, two months before the first reported cases of what became covid-19, the johns hopkins center for health security, in partnership with the world economic forum and the bill and melinda gates foundation, hosted a tabletop exercise in new york city called Event 201. The scenario walked participants through the global response to a hypothetical novel coronavirus outbreak, beginning in pig farms in brazil, spreading internationally via air travel, generating a global pandemic, and producing the cascade of public-health, economic, and political responses the participants were asked to game out. The participants included senior figures from the CDC, the johns hopkins apparatus, lufthansa, johnson and johnson, the chinese center for disease control, the world bank, and several pharmaceutical and media organizations.

The exercise’s published recommendations included the suppression of misinformation through coordinated platform action, the mass deployment of as-yet-undeveloped vaccines under emergency-use authorization frameworks, the centralization of pandemic communication under WHO coordination, and the implementation of contact-tracing and quarantine measures the participants acknowledged would face civil-liberty objections. The recommendations are documented in the exercise’s public after-action report, which can be retrieved from the johns hopkins website. The covid response that followed two months later implemented essentially every one of the exercise’s recommendations on a closely matching timeline.

The standard explanation for the resemblance is that the exercise was prescient because the people who designed it understood the structural features of the response any major coronavirus pandemic would require, and the actual covid response converged on the same features for the same structural reasons. This explanation is internally coherent and unfalsifiable in the way the nuclear close-call literature is unfalsifiable. The alternative explanation is that the exercise was not prescient because exercises do not need to be prescient when the response they map is the response that the apparatus that ran the exercise was already preparing to implement, and the exercise was the planning phase rather than the prediction. The two explanations are not, in the documentary record, distinguishable, and the apparatus has no incentive to clarify the distinction because the clarification is the kind of inquiry the apparatus is structured to deflect.

The earlier exercises in the same series — Dark Winter in 2001, Atlantic Storm in 2005, Clade X in 2018, Crimson Contagion in 2019 — followed the same format and produced the same kind of recommendation set, and several of the participants in the earlier exercises went on to occupy the senior positions in the covid response. The personnel continuity is part of the operation. The same small network of bio-policy professionals appears across the exercises, the response, the funding decisions, the regulatory frameworks, and the post-response retrospective analyses, and the network’s interest in the continuation of the pandemic-preparedness frame is the kind of structural interest that does not require any particular instance of bad faith to operate.

The Lab Leak Question as Controlled Opposition

The covid pandemic’s origins are the most documentary-rich case of a question the apparatus structurally cannot answer in the way the audience would expect from a normal scientific inquiry. The two main hypotheses — natural zoonotic spillover from an animal reservoir at a wet market, versus accidental release from a research facility conducting gain-of-function coronavirus work — were both available from the early weeks of the outbreak, and the apparatus’s initial public position, from february 2020 onward, was that the natural-spillover hypothesis was the only scientifically credible position and that the lab-leak hypothesis was misinformation requiring active suppression on the major platforms.

The institutional sequence by which the lab-leak hypothesis was suppressed is now documented to a level that the orthodox dismissal cannot survive. The february 2020 Lancet statement signed by a group of public-health figures coordinated by peter daszak of the EcoHealth Alliance — the same EcoHealth Alliance that had been funding the wuhan institute of virology’s coronavirus research through subgrants from the U.S. National Institute of Allergy and Infectious Diseases — declared that scientists “overwhelmingly conclude that this coronavirus originated in wildlife.” The signatories did not disclose, in the published statement, that several of them had direct financial and professional ties to the laboratory whose involvement they were ruling out. The freedom-of-information litigation that followed, coordinated principally by the U.S. Right to Know organization and several investigative journalists, produced the email correspondence between daszak, anthony fauci, francis collins, jeremy farrar at the wellcome trust, and the small group of scientists who composed the original proximal origin paper that became the canonical scientific authority for ruling out the lab-leak hypothesis. The correspondence shows the scientists in question discussing, in private, that the genomic features of the virus were consistent with engineering and that the lab-leak hypothesis required serious consideration, then publishing the public paper that declared the engineering hypothesis ruled out.

The shift in the public position of the major institutions occurred in 2021 and 2022 as the Wall Street Journal’s reporting on the U.S. State Department fact-finding cable and on the U.S. intelligence community’s classified assessments made the suppression untenable. By 2023 the U.S. Department of Energy, the FBI, and several other intelligence agencies had publicly assessed the lab-leak hypothesis as the most likely explanation. The CIA reportedly concluded the same in early 2025. In april 2025 the white house replaced the federal covid.gov informational portal with a page titled Lab Leak: The True Origins of COVID-19, asserting the laboratory-origin hypothesis as the official position of the U.S. executive branch and listing five evidentiary points: the unusual furin-cleavage-site biological feature absent from naturally occurring related coronaviruses, the genomic evidence consistent with a single introduction event into humans rather than a multi-source spillover, the proximity of the outbreak to the wuhan institute of virology and that institute’s documented gain-of-function coronavirus research program, the reported illness of WIV researchers with covid-compatible symptoms in the autumn of 2019, and the absence of credible natural-origin evidence after five years of intensive search. The page is accessible at whitehouse.gov/lab-leak-true-origins-of-covid-19.

The structural function of the lab-leak frame, once one steps back from the two-hypothesis contest the audience is invited into, is that both hypotheses are structured inside the same unstated premise: that there is a discrete pathogenic entity called SARS-CoV-2, that the entity has a location of origin (a bat, a pangolin, a laboratory flask), that it was transmitted from that location to human hosts, and that the subsequent illness observed in those hosts was caused by the entity’s replication in their tissues. The two hypotheses differ about the first step of the causal chain and are in perfect agreement about everything downstream of the first step. The agreement about the downstream steps is load-bearing for the entire pandemic-preparedness industry, and the controlled disagreement about the upstream step is the cosmetic debate the audience is permitted in order to keep the disagreement from travelling further upstream to the place the apparatus cannot afford to have it go. The lab-leak revelation, in the form the white house adopted it in april 2025, is the controlled release of the maximum amount of bioweapons-program admission the apparatus can tolerate without the admission metastasising into the kind of inquiry that would question the ontology of the object at the centre of the program. It is the limited hangout in the mascot-configuration sense, and its limited character is the feature not the bug.

The Prior Question

The anterior question the lab-leak debate is constructed to prevent is the question of whether the object the debate is about exists in the form the debate presupposes. The object — a discrete, replication-competent, cell-to-cell transmissible particle called a virus, identifiable by its distinctive genomic sequence, isolable from host material by standard laboratory methods, capable of being purified, photographed in unambiguous monoculture, introduced into naive hosts under controlled conditions, and producing in those hosts the characteristic pattern of illness the original patients displayed — has a troubled status in the empirical literature when the literature is read outside the interpretive framework that the field trains its practitioners to read it inside.

The troubled status is not a fringe position. It is a position that the actual history of virology has had to finesse from the beginning, and the finesse has been sufficiently successful that the field’s practitioners are largely unaware that a finesse has occurred. The apparatus of virology was constructed, during the period of roughly 1890 to 1930, as the residual category into which the diseases Koch’s postulates could not account for were placed. Koch’s postulates — formulated by Robert Koch in the 1880s as the methodological criteria for demonstrating that a specific microorganism causes a specific disease — require that the organism be found in all cases of the disease, isolated in pure culture, reintroduced into a healthy host, reproduce the disease in that host, and be re-isolated from the newly infected host. The postulates were the epistemic backbone of the germ-theory program, and the program’s bacterial triumphs (tuberculosis, cholera, anthrax, diphtheria) were certified by the postulates’ satisfaction. The diseases for which the postulates could not be satisfied became the filterable agent residue: whatever was causing them was small enough to pass through the porcelain filters that would retain bacteria, invisible to the optical microscopy of the period, and never isolated in the Kochian sense — and that residual category was renamed viruses and granted the same ontological status as the bacterial pathogens the postulates had certified, on the basis of an analogy the postulates themselves had not licensed.

The Isolation Problem and the Genome Assembly Issue

The isolation question is the load-bearing one. When virologists describe the isolation of a virus, they do not mean what a chemist or a microbiologist working in the Kochian register would mean by the term. They mean the preparation of a cell culture (typically Vero cells from monkey kidney, or similar immortalised lines) into which patient sample material has been introduced, the observation of cytopathic effect — the degeneration and death of the cultured cells — and the subsequent attribution of the cytopathic effect to a viral particle presumed to be present in the sample. The cultured cells are also deprived of nutrients, exposed to antibiotic cocktails, and under various forms of metabolic stress that produce cytopathic effect in the absence of any added sample material, as the control experiments the field does not routinely publish have repeatedly shown. The claim that the cytopathic effect demonstrates viral presence is a claim that the methodology has not licensed and that no published experiment has cleanly tested.

The situation is further complicated by the fact that the particles actually photographed under electron microscopy from these preparations are not distinguishable, on any morphological or biochemical criterion the field has established, from exosomes — naturally produced extracellular vesicles that every cell in the body secretes as part of normal intercellular communication. James Hildreth, immunologist and president of Meharry Medical College, stated in 2003 in the context of HIV research that “the virus is fully an exosome in every sense of the word.” The statement is not a fringe claim. It is the acknowledgment by a senior immunologist of the fact that the objects virologists isolate and the objects cells produce as part of normal biology are, under the methods of the field, indistinguishable from one another. The question then becomes which of the two kinds of objects is doing the causal work the field attributes to the first kind, and the field’s methods do not permit the question to be answered.

Stefan Lanka, the german biologist who earned a doctorate studying marine viruses before publicly renouncing the ontological status of human-pathogenic viruses, filed a formal challenge in 2011 offering 100,000 euros to anyone who could produce a scientific paper demonstrating the existence of the measles virus according to Kochian standards. The case went through the german courts. The lower court initially ruled that the papers submitted by the claimant satisfied the standard. The higher court, on appeal, reversed the ruling on the basis that the six papers submitted did not individually satisfy the Kochian criteria, that no single paper demonstrated isolation in the classical sense, and that the field’s collective practice of treating the combined literature as establishing the existence of the entity was not equivalent to the demonstration Lanka’s challenge had specified. The german federal court of justice (the Bundesgerichtshof) upheld the reversal in 2017. The ruling is a matter of public record in the german court system and is not a fringe document. Its implications for the broader isolation question in the rest of the virological field have not been addressed by the field, because the field does not read german court rulings on questions of methodology and because the implications would require the field to reopen a question it considers closed.

The SARS-CoV-2 genome itself was assembled in silico, not extracted from a purified viral preparation. The original reference genome, deposited in early january 2020 by the shanghai team led by yong-zhen zhang, was assembled from bronchoalveolar lavage fluid from a single wuhan patient, by sequencing the total nucleic acid content of the sample and reconstructing the presumed viral genome by computational alignment against existing coronavirus reference sequences. The assembly was published within days of the first reported cases and became the basis for every PCR test, vaccine, and diagnostic-antibody assay subsequently deployed globally. The assembled genome was never extracted from a purified particle whose identity had been independently established. The assembly was the establishment of the identity. The methodological circularity is the kind of feature the field treats as a minor technical matter and that a methodologist working from outside the field would treat as a load-bearing objection to the entire downstream edifice. Christine Massey, a canadian biostatistician, has filed freedom-of-information requests with dozens of public-health agencies around the world requesting any document demonstrating the isolation of SARS-CoV-2 from a patient sample by any method meeting the standards the agencies themselves apply to other biological questions. The agencies have, without exception, either responded that they possess no such document or declined to respond. The requests are documented on her website.
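The reference-guided step described above can be sketched in miniature. Everything below is invented for illustration and is not the actual assembly pipeline: reads are placed by exact match against a chosen reference, reads that fail to match are silently discarded, and positions no read covers are filled in from the reference itself, so the output necessarily resembles the reference it was aligned against.

```python
# Toy sketch of reference-guided consensus assembly. All sequences and
# function names are hypothetical; real pipelines use inexact alignment,
# but the structural point survives the simplification.

def map_read(read, reference):
    """Return the first exact-match position of read in reference, or None."""
    pos = reference.find(read)
    return pos if pos != -1 else None

def consensus(reads, reference):
    """Build a consensus over the reference from whichever reads map to it."""
    coverage = [[] for _ in reference]
    for read in reads:
        pos = map_read(read, reference)
        if pos is None:
            continue  # non-matching reads silently drop out of the assembly
        for i, base in enumerate(read):
            coverage[pos + i].append(base)
    # Positions with no read coverage are filled from the reference itself
    return "".join(max(set(c), key=c.count) if c else ref_base
                   for c, ref_base in zip(coverage, reference))

reference = "ATGGCTAGCTTACGATCGATCCGTA"   # hypothetical related-genome reference
sample_reads = [
    "ATGGCTAG",   # maps to the reference
    "TACGATCG",   # maps to the reference
    "GGGGTTTT",   # does not map: excluded from the result entirely
]
assembled = consensus(sample_reads, reference)
print(assembled)  # every base is either read-derived or reference-derived
```

The design choice worth noticing is the `else ref_base` branch: the gaps in the evidence are papered over with the very sequence the result is supposed to confirm.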

The Rosenau Experiments and the Firstenberg Correlation

The transmissibility of the category of illness the apparatus has taught the population to call contagious disease is the other load-bearing premise the field treats as too obvious to test. It was tested, rigorously and repeatedly, during the 1918-1919 influenza period — a period when the ethical frameworks that made such experiments subsequently impossible had not yet been installed — and the results are in the published literature of the time and have been ignored ever since.

Milton Rosenau, professor of preventive medicine at harvard and the U.S. Public Health Service’s senior epidemiologist, conducted a series of deliberate-transmission experiments in november and december 1918 on volunteers from the U.S. Navy’s quarantine station at Gallops Island in boston harbour. The subjects — approximately one hundred healthy young sailors — were exposed to material from active influenza patients by every route the contagionist framework would predict to be effective: nasal instillation of filtered mucus and throat-washings, subcutaneous and intramuscular injection of blood drawn from severely ill patients, direct face-to-face coughing at close range by ill patients onto the faces of the volunteers, and the transfer of the ill patients’ oral and nasal secretions directly into the noses, mouths, and eyes of the volunteers. The experiments were repeated simultaneously at the Goat Island naval training station in san francisco bay by the public health service physician George McCoy using identical protocols on a separate set of approximately fifty volunteers. In neither set of experiments did a single volunteer develop influenza from any of the routes attempted. The results were published in the Journal of the American Medical Association in august 1919 under Rosenau’s authorship. The conclusion Rosenau drew in print was that “perhaps there are factors, or a factor, in the transmission of influenza that we do not know.” The factors, if any, were never identified by the subsequent development of the field, because the field pivoted to the search for the causative particle rather than to the investigation of why the transmission could not be reproduced.

The correlation between the global deployment of new classes of electromagnetic-environment technology and the outbreak of the pandemics of the past century and a half is documented at book length in Arthur Firstenberg’s The Invisible Rainbow: A History of Electricity and Life, published in 2017, whose methodology is largely chronological and correlational and whose central tables are disconcertingly hard to explain away. The 1889 so-called Russian flu pandemic occurred in the same years as the first large-scale alternating-current power distribution systems. The 1918 Spanish flu pandemic occurred in the same year as the deployment of high-frequency radio transmission at scale for the first world war. The 1957 Asian flu pandemic occurred in the year radar installations were reaching their cold-war global coverage. The 1968 Hong Kong flu occurred in the year the first commercial satellites entered orbit. The 2020 covid event occurred in the year of the initial global rollout of 5G millimetre-wave cellular infrastructure. The correlation does not establish causation in the rigorous sense and is vulnerable to the kind of pattern-seeking critique that correlation-based arguments are always vulnerable to. It is less vulnerable to the pattern-seeking critique when the historical record is read with the Rosenau experiments in view: if the transmissibility of the putative pathogen cannot be demonstrated by deliberate attempted transmission, and the population-scale pattern coincides consistently with the introduction of new electromagnetic environmental loads, the inference that the population-scale pattern is better accounted for by the electromagnetic environmental load than by the putative pathogen is at minimum on methodological parity with the reverse inference, and at maximum is the inference the evidence actually supports. The field has not engaged with Firstenberg’s argument because the field’s funding structure would not survive its engagement.

The Cycle Threshold Question

The PCR (polymerase chain reaction) test was the operational instrument that produced the case numbers on which the entire covid response framework was built. PCR is a real and powerful laboratory technique conceived by kary mullis at the cetus corporation in 1983, for which mullis received the 1993 nobel prize in chemistry. The technique amplifies a target DNA or RNA sequence through cycles of thermal denaturation, primer annealing, and enzymatic extension, doubling the target with each cycle. By the time the amplification has run for thirty cycles, the original target has been duplicated over a billion times, and any sequence with even a single starting copy in the sample will have produced enough material to detect.
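The doubling arithmetic is easy to check: under ideal efficiency the copy count after n cycles is the starting count times 2^n, so a single starting copy passes one billion at cycle 30. A minimal sketch:

```python
# Ideal-efficiency PCR amplification: each cycle doubles the target,
# so copies(n) = start * 2**n.

def copies_after(cycles, start=1):
    return start * 2 ** cycles

print(copies_after(30))  # 1073741824: over a billion from one starting copy
```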

The cycle threshold (Ct) at which a sample is declared positive is therefore not a measurement of how much virus the patient is carrying. It is a parameter the laboratory chooses, and the parameter determines what fraction of samples register as positive. A Ct of 25 will return positive only on samples carrying significant target load. A Ct of 35 will return positive on samples carrying trace amounts that may or may not represent any meaningful biological quantity. A Ct of 40 or above will return positive on samples carrying environmental contamination, residual fragments from earlier respiratory events, or the ambient genomic noise that any biologically active human carries as a matter of course.
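The paragraph’s point, that the cutoff rather than the sample decides what registers as positive, can be sketched under the idealised assumptions of perfect doubling and a fixed detection threshold. The threshold value and the sample loads below are illustrative, chosen as powers of two so the crossing cycles come out exactly to 25, 35, and 40:

```python
import math

# Illustrative sketch: assume ideal doubling and a fixed detection threshold.
# A sample crosses the threshold at Ct = log2(threshold / starting_copies);
# whether it is reported positive depends only on the laboratory's cutoff.
DETECTION_THRESHOLD = 2 ** 40  # assumed instrument sensitivity, illustrative

def ct_needed(start_copies):
    """Cycle at which start_copies, doubling each cycle, crosses the threshold."""
    return math.log2(DETECTION_THRESHOLD / start_copies)

def is_positive(start_copies, ct_cutoff):
    return ct_needed(start_copies) <= ct_cutoff

samples = {"high load": 2 ** 15, "trace": 2 ** 5, "single copy": 1}
for cutoff in (25, 35, 40):
    positives = [name for name, n in samples.items() if is_positive(n, cutoff)]
    print(cutoff, positives)
# Ct 25 reports only the high-load sample; Ct 35 adds the trace sample;
# Ct 40 reports all three, single-copy noise included.
```

The same three samples yield three different positive fractions from three different cutoffs, which is the sense in which the Ct is a parameter rather than a measurement.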

The covid testing protocols across most jurisdictions during 2020 and 2021 used Ct values in the 35-45 range, producing case numbers whose relationship to actual illness was tenuous in a way the public health establishment did not communicate to the audience and that the audience was not equipped to inquire about. The new york times published a front-page article on 29 august 2020 documenting that approximately 90 percent of positive tests in massachusetts, new york, and nevada at Ct values of 40 carried so little material that they would have been negative at a Ct of 30. The article ran. The protocols did not change. The case numbers continued to drive the policy response. Mullis himself, the inventor of PCR, had publicly stated in the late 1990s in the context of the HIV-AIDS controversy that PCR was not appropriate as a diagnostic instrument for active infection, that it could not be used to determine whether a patient was sick, and that the application of the technique to the identification of causative viral agents was a misuse of the method. Mullis died in august 2019, four months before the covid outbreak began. His objection to the diagnostic use of his own technique was therefore not available as a contemporary check on the covid testing protocols, and his death was the kind of timing that the rationalist register treats as coincidence and the esoteric register treats as something to notice without overclaiming.

The structural function of PCR in the pandemic-preparedness apparatus is that it provides an instrument whose output can be dialled up or down at the operator’s discretion to produce the case-number trajectories the response calls for. It is not a diagnostic test in the sense the audience understands the term. It is a tuning knob, and the apparatus turns it when the apparatus requires the case numbers to go up or down. The Corman-Drosten protocol — the reference PCR protocol for SARS-CoV-2 adopted globally in january 2020 and rushed through eurosurveillance peer review in less than 48 hours — was the subject of an external peer review published in november 2020 by pieter borger and twenty-two other scientists that identified ten separate molecular and methodological flaws, any one of which would have been sufficient to invalidate the protocol in a normal peer-review context. The review was submitted to eurosurveillance with a retraction request. The journal declined to retract. The protocol remained in operational use.

mRNA, Operation Warp Speed, and the Nuremberg Question

The mRNA vaccine platforms developed by Moderna and BioNTech-Pfizer were rolled out under emergency-use authorization frameworks in december 2020 with clinical trial periods that compressed the conventional vaccine development timeline by a factor of approximately ten. The compression was the operational achievement the Operation Warp Speed program was designed to deliver. The U.S. government pre-purchased hundreds of millions of doses, indemnified the manufacturers against liability, and coordinated the distribution and administration through state public health systems and pharmacy chains under varying degrees of formal mandate.

The platforms themselves are a real piece of biotechnology in the sense that the lipid nanoparticle delivery vehicle, the modified-nucleoside mRNA payload, and the ribosomal expression of the spike protein in the recipient’s cells are functional at the biochemical level in the way the platforms claim. The fact that the delivery vehicle delivers and the expression system expresses does not entail that the expressed protein is targeting a real pathogenic entity or that the resulting antibody response is preventing an illness that would have occurred in its absence. What the platforms demonstrably do is introduce a foreign-designed genetic instruction into the host’s translation machinery and cause the host’s own cells to produce a protein the host’s immune system then responds to. Whether the biological consequences of that operation, aggregated across billions of administrations, are on net beneficial, neutral, or harmful is an empirical question the apparatus has arranged not to have answered by any process independent of the apparatus.

The published efficacy data from the original phase-three trials showed approximately 95 percent relative risk reduction for symptomatic covid-like illness as operationally defined by the trial protocol and measured by the same PCR instrumentation whose cycle-threshold problems render the endpoint measurement dependent on laboratory choice — which translated to approximately 0.84 percent absolute risk reduction at the population level, a difference between fewer than 1 in 1000 vaccinated participants testing positive versus roughly 9 in 1000 controls. The 95 percent figure was the headline number used in public communication. The 0.84 percent figure was the number a clinically literate reader of the same paper could derive from the published tables. The difference between these two ways of presenting the same data is the difference between communicating to the audience and managing the audience, and the consistent choice to use the relative-risk figure is part of the operational signature.
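The relative-versus-absolute arithmetic can be reproduced directly from the published phase-three event counts, roughly 8 endpoint cases among 18,198 vaccinated participants against 162 among 18,325 controls, which is where both figures quoted above come from:

```python
# Relative vs absolute risk reduction from the same 2x2 table. Counts are
# the approximate published phase-three event numbers.

cases_vax, n_vax = 8, 18198
cases_ctl, n_ctl = 162, 18325

risk_vax = cases_vax / n_vax      # ~0.04% of vaccinated participants
risk_ctl = cases_ctl / n_ctl      # ~0.88% of controls

rrr = 1 - risk_vax / risk_ctl     # relative risk reduction: the headline
arr = risk_ctl - risk_vax         # absolute risk reduction: the footnote
nnt = 1 / arr                     # number needed to treat per prevented case

print(f"RRR = {rrr:.1%}")   # ~95.0%
print(f"ARR = {arr:.2%}")   # ~0.84%
print(f"NNT = {nnt:.0f}")   # ~119 vaccinations per prevented endpoint case
```

Both numbers describe the identical table; which one is put on the poster is a communication decision, not a statistical one.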

The safety data is the part of the question on which the apparatus has been most reluctant to accept open inquiry. The U.S. Vaccine Adverse Events Reporting System (VAERS), the V-Safe surveillance program, the U.K. Yellow Card system, and several other passive surveillance frameworks recorded post-vaccination adverse events at rates substantially elevated above the historical baseline for vaccines, including documented signals for myocarditis in young men, menstrual disruption in women, sudden cardiac events in athletes, and a category of post-vaccination chronic illness whose constellation of symptoms — fatigue, neurological dysfunction, exercise intolerance, autoimmune-pattern illness — overlaps significantly with the long covid syndrome that the same apparatus has acknowledged as a real consequence of the infection it claims the vaccine was designed to prevent, which is the kind of overlap that deserves more interrogation than the apparatus has permitted it to receive. Robert Malone, one of the early developers of the mRNA platform itself, became a public critic of the deployment and was systematically marginalized in the way the apparatus marginalizes its own dissident participants. Peter McCullough, John Ioannidis at stanford, Vinay Prasad at UCSF, Aseem Malhotra in the U.K., and a substantial number of other clinically credentialed critics have produced a body of work that the apparatus has chosen to suppress or marginalize rather than engage with.

Whether the deployment violated the post-nuremberg framework for medical experimentation depends on whether one reads the emergency-use authorization framework as a legitimate regulatory category that the framers of the nuremberg code would have endorsed, or as a circumvention of the informed-consent requirement the code was designed to enforce in perpetuity. The honest answer is that it is the latter under any reading that takes the code’s purpose seriously, and the dispute over this point is partially a dispute over whether the historical lessons that produced the code still apply to the contemporary regulatory environment or whether the contemporary environment’s emergencies justify the suspension. The structural reading is that the apparatus that decides whether the suspension is justified is the same apparatus that benefits from the suspension, and the decision is therefore not a decision in the sense the framers of the code understood the term.

Terrain, Pushed to the Limit

The germ theory of disease, in its modern Pasteurian form, holds that infectious illness is caused by the invasion of pathogenic microorganisms into a host whose susceptibility is largely a passive function of immune competence. The competing terrain theory, associated with Antoine Béchamp and the broader French biological tradition that Pasteur eclipsed in the late nineteenth century, holds that the host’s internal biological state — terrain — is the primary determinant of whether any given exposure to a microorganism produces illness, and that the same microorganism in the same dose can produce vastly different outcomes depending on the host’s nutritional, metabolic, neurological, and ecological condition.

In its ordinary presentation, terrain theory is offered as a corrective to the germ framework: the microbe is still real, but its effect is modulated by the host’s condition. The corrective framing concedes the germ framework’s ontology and only disputes its weighting. The deeper reading of the Béchampian tradition — the one the contemporary terrain advocates who have done their homework actually hold — is that the corrective framing is itself a compromise position, and that the Béchamp-Pasteur dispute was never about the weighting of two agreed-upon factors. It was about whether the pathogenic entities the germ framework names are the right kind of object at all. Béchamp’s microzymas — the fundamental biological units he identified as the substrate of living matter and the precursor of what become bacteria and other morphologies under changed terrain conditions — are not the same kind of entity as Pasteur’s fixed-species pathogens. Béchamp’s framework is pleomorphic: the microbial forms that appear in tissue under different conditions are not invading agents but transformations of the host’s own biological substrate in response to the conditions the terrain presents. Gaston Naessens in the twentieth century, working with an unconventional dark-field microscope he built himself, claimed to observe the same pleomorphic transformations Béchamp had described, and was persecuted by the medical establishment in Quebec in a pattern familiar from the other cases the parasitic ecology surfaces.

The contemporary no virus position — held by Stefan Lanka, Andrew Kaufman, Tom Cowan, Mike Stone at the ViroLIEgy project, and the authors of The Contagion Myth (Cowan and Morell, 2020) — is the logical terminus of the Béchampian reading once the isolation problem and the genome-assembly problem have been taken seriously. The position is not that people do not get sick. People clearly get sick. The position is that the object called the virus is a category error — an artifact of the methods virology uses to look for causes of illness, rather than a discovery about the causes themselves. The illnesses the field attributes to viruses are real biological events. Their causes are terrain disturbances: nutritional collapse, metabolic derangement, electromagnetic environmental load, toxic exposure, psychic stress, the seasonal electromagnetic variations the body adapts to, and the occasional large-scale environmental insults (the Firstenberg correlations) that produce population-scale illness without any need for a transmissible particle as mediator.

The strong reading sits uncomfortably with the contemporary biomedical framework and the lay-audience intuitions the framework trains. The contemporary framework is also epistemically unsound at its load-bearing points — the isolation problem, the genome-assembly circularity, the failed transmission experiments, the exosome indistinguishability, the Koch-postulate non-satisfaction. The discomfort is a symptom of a frame mismatch, not a refutation of the strong reading. Jim West’s work on the historical correlation between DDT deployment and the polio epidemics of the 1940s and 1950s — a correlation West documents in detail using the original public-health records of the period, and one that maps tightly onto the geographic and temporal distribution of the epidemics — is the kind of reframing the strong position permits once the viral-entity frame has been set aside: polio’s causative agent, on this reading, was not a virus but pesticide neurotoxicity; the polio virus was a methodological artifact the period’s virology needed in order to have something to attribute the observed neurological damage to; and the disappearance of polio after the introduction of the vaccine was coincident with the phasing out of DDT in the same window, which is the counterfactual the standard narrative does not acknowledge.

The deathbed concession attributed to Pasteur — “the microbe is nothing, the terrain is everything” — is not historically secure. There is no contemporaneous source for it. Several Pasteur biographers have looked for the original and not found it. The phrase circulates as a kind of folkloric concession that may or may not have been said and that the partisans of the terrain framework cite without rigorous attestation. Whether or not Pasteur said the words, the structural priority dispute between him and Béchamp was a real and consequential intellectual event whose resolution in Pasteur’s favour was a function of Pasteur’s superior institutional position rather than the superior empirical content of his theory, and the historiographic recovery of Béchamp’s work over the past several decades has shifted the assessment of the dispute in directions the orthodox framework is not equipped to absorb.

Parasites and Viruses: The Ontological Distinction

The category of parasites and the category of viruses are often lumped together in the lay intuition as two species of invisible biological threat, and the lumping is a mistake of the kind the apparatus relies on. Parasites — the macroscopic and microscopic organisms with their own cellular machinery, their own metabolism, their own life cycle, their own independent history of investigation in the medical literature of the pre-Pasteur era and in the veterinary and tropical-medicine literature ever since — are demonstrably real in a way that passes Kochian and pre-Kochian tests with no methodological finesse required. Tapeworms can be extracted from host tissue and placed on a scale. Toxoplasma can be propagated in cell culture and introduced into naive hosts with reproducible effects. Plasmodium has a documented life cycle involving two independently identifiable hosts. The parasitic category is the category for which the germ-theory framework’s epistemic tools actually work. The parasitic category is also, from the esoteric register, the category the parasitic ecology has always insisted on: real biological organisms with both a physical anatomy available to the mundane microscope and an astral-layer counterpart available to the practices of the traditions that work at that layer. Parasites are dual-layer phenomena that manifest in the physical in a form that the physical methods can register, and that is why the methods can register them.

Viruses, on the strong reading, are not a second species of the parasite category. They are a category error the materialist methodology produced when it attempted to extend its reach into phenomena whose dual-layer character does not include a stable physical-layer instantiation. What the field calls a virus is, in the strong reading, a name the field has given to a combination of two different things the field has conflated: on one hand, the exosomal and apoptotic debris the stressed host cell produces as part of its own response to the terrain insult that is making the host sick; and on the other hand, the distributed morphic-resonance pattern by which terrain-disturbed populations come to share a symptom-pattern without any mechanical mediator the physical methods can identify. The two components are of entirely different ontological kinds, and the field’s conflation of them under a single label is the reason the methodology can never quite settle the question of whether its object is a particle or a process. The object is neither, in the strong reading, and the apparatus’s refusal to engage with the ontological question is the tell.

The morphic component of the reading is where the standard-materialist reader is permitted to exit if the reader insists. The rest of the reading stands on its own methodological grounds without requiring the morphic component. The morphic component is included because the honest accounting of the observed phenomenology — the population-scale coincidence of symptoms, the absence of reproducible mechanical transmission, the correlation with electromagnetic environmental changes, the suggestibility and psychosomatic dimensions of the illnesses that the clinical literature has always noted and always minimised — requires, at some point, a non-mechanical account of how the symptom-pattern propagates. The morphic-resonance framework developed by Rupert Sheldrake, the egregoric framework the western occult tradition has carried since antiquity, and the collective-unconscious framework of the Jungian tradition all gesture at the same underlying mechanism from their different vocabularies. The mechanism is that conscious and subconscious attention, sustained and synchronized across a population under the right environmental and narrative conditions, produces the symptomatic crystallization the observers then interpret as contagion. The apparatus that wants the contagion reading is also the apparatus that supplies the synchronizing narrative and the sustaining attention. The apparatus manufactures the conditions under which its own explanatory framework generates the phenomena the framework is then invoked to explain. This is the rendering operating at the scale of the population’s physiological experience.

The Consciousness-Primacy Reading

Under the consciousness-primacy frame the entire architecture takes on a form the materialist reader will find extreme and the reader already inside the frame will find obvious. Reality is not the interaction of inert particles in a prior physical substrate that consciousness passively observes. Reality is what consciousness, distributed across many centres of attention and stabilised by the consensus mechanisms the rendering describes, collectively generates and sustains. The physical illnesses of the population are the downstream manifestations of the terrain disturbances the population is subjected to — electromagnetic, nutritional, toxic, psychic, and narrative disturbances whose effects are mediated at the level at which the physical body is itself rendered from the consciousness substrate. The pathogenic entities the germ-theory framework names are not the causes of the illnesses; they are, at most, the symptom-markers the disturbed consciousness projects into its own physical rendering as placeholders for the terrain disturbance the rendering is registering.

The pandemic-preparedness apparatus, in this frame, is not an apparatus that prepares for natural biological events that occur outside the frame. It is an apparatus that scripts the narrative component of the disturbance, uses the narrative to synchronise population attention around a symptom-pattern whose physiological crystallisation the attention then produces, delivers interventions whose biological effects are then interpreted as responses to the attributed causal entity, and harvests the affective currency of the induced dread for the legitimisation of the broader configuration the apparatus serves. The apparatus is not responding to the pandemics. The apparatus is running the pandemics, in the specific and operationally precise sense that the apparatus provides the narrative, environmental, and pharmaceutical inputs whose combination generates the population-scale physiological event, and then markets the event as an external threat whose management requires the apparatus’s own perpetuation. The bio-fear-grid is not a metaphor for the apparatus’s social function. It is the operational description of how the apparatus works at the level at which it actually works, which is the level of the consensus rendering through which the population’s bodies and the population’s beliefs are generated in the same motion.

The Annie Jacobsen Sequel

The reported follow-up project to Nuclear War: A Scenario is a book on the bio-warfare threat, scheduled to perform the same kind of fear-grid maintenance for the bio sector that the nuclear book performed for the nuclear sector. The structural prediction the theater-state frame permits is that the bio book will rely on insider sources from the same network of pandemic-preparedness professionals who ran Event 201 and the covid response, will present a high-stakes scenario built around an engineered pathogen from a laboratory or terrorist source, will be reviewed favorably by the same press apparatus that promoted the nuclear book, and will produce the affective output of audience dread directed at the bio-fear-grid sector at exactly the moment the apparatus needs that sector primed for its next operational deployment.

The prediction does not require inside information about the book’s contents. It follows from the structural function the access journalist plays in the apparatus and the operational requirements of the sector the journalist services. The bio-fear-grid has been undergoing the same kind of post-active-cycle drift that the nuclear-fear-grid underwent in the 1990s and 2000s, when the audience’s attention to the cold-war nuclear scenario diminished and the apparatus needed periodic refresh through the close-call literature, the strategic-stability think-tank productions, and ultimately the Nuclear War: A Scenario refresh. The bio sector has just gone through its acute phase (2020-2023), is now in the post-acute drift, and is structurally due for the next priming refresh. The journalist whose access to the relevant insider network is established and reliable is the natural delivery vehicle for the refresh, and the refresh will arrive on the schedule the apparatus’s calendar requires. The refresh will be built around the engineered-pathogen scenario because the engineered-pathogen scenario is the only scenario that keeps the audience’s assent to the underlying ontology intact — that is, it keeps the audience believing that the class of object the apparatus prepares for exists, while directing their apprehension toward the variant of it that requires the apparatus’s continued funding.

The Withdrawal

The structural counter-operation, in the framework the theater-state typology supplies, is the withdrawal of attention from the bio-fear-grid as the affective object the apparatus needs the audience to feed, and the redirection of attention to the upstream apparatus the bio-fear-grid protects. The upstream apparatus is the institutional alignment between the public-health establishment, the pharmaceutical and biotechnology sectors, the philanthropic foundations whose grants police the field, the research universities whose careers depend on the alignment, and the regulatory bodies whose personnel rotate through all of those institutions in sequence. The structural grandparent of the contemporary configuration is the Flexnerian capture of American medicine in the 1910-1925 window, and the continuity between that capture and the current pandemic-preparedness industry is not metaphorical: the same foundations, restructured and renamed, fund the same kind of institutional homogenization, produce the same kind of single-orthodoxy therapeutic framework, and enforce the same kind of exclusion of competing traditions. The bio-fear-grid object captures attention and protest energy that would otherwise travel upstream to the alignment, in the same structural sense that the mascot configuration captures attention in the geopolitical register and the nuclear deterrent narrative captures attention in the strategic-stability register. The mechanism is the same. The function is the same. The withdrawal is the same.

The withdrawal is also, at the deeper level, the withdrawal of the population’s consent to the ontology the apparatus requires the population to hold. A population that no longer believes in the reality of the class of object the apparatus prepares for is a population from which the apparatus cannot harvest the affective currency its operations run on. The withdrawal does not require the population to have settled the methodological questions to the satisfaction of any particular school. It requires the population to notice that the questions have never been settled to anyone’s satisfaction, that the field’s refusal to engage them is a structural refusal rather than a considered response to a closed debate, and that the refusal is the feature of the field that distinguishes it from the fields that are not running an operation. Terrain care — nutritional sufficiency, sunlight, sleep, environmental sanity, reduction of electromagnetic load, community connection, meaningful work, freedom from the synchronised narrative disturbance the apparatus generates — is the practice the withdrawal enables, and the practice is the real protection the bio-fear-grid was constructed to keep the audience from finding.

The counter-operation is not a claim that all illness is imaginary or that no biological intervention is ever warranted. It is a claim about the frame within which illness and intervention are read, and about the ontology of the objects the frame populates the world with. Under the right frame, the right interventions become available. Under the apparatus’s frame, only the apparatus’s interventions are permitted. The right frame is the one the apparatus has spent a century suppressing, and its recovery is the work the rendering section of the broader account is concerned with.

References

  • Schwab, Klaus, and Thierry Malleret. COVID-19: The Great Reset. Forum Publishing, 2020. The frank statement of the apparatus’s preferred narrative.
  • Johns Hopkins Center for Health Security, World Economic Forum, and Bill and Melinda Gates Foundation. Event 201 Pandemic Exercise: Highlights and Recommendations. Johns Hopkins, October 2019.
  • Chan, Alina, and Matt Ridley. Viral: The Search for the Origin of COVID-19. Harper, 2021.
  • Quay, Steven, and Richard Muller. “The Science Suggests a Wuhan Lab Leak.” Wall Street Journal, 6 June 2021.
  • Andersen, Kristian G., et al. “The Proximal Origin of SARS-CoV-2.” Nature Medicine 26 (2020): 450-452. The canonical paper whose suppression of the lab-leak hypothesis is documented in the FOIA-released correspondence between the authors.
  • U.S. Right to Know. FOIA-released correspondence on the proximal origin paper and the EcoHealth Alliance grants. Available at usrtk.org.
  • The White House. Lab Leak: The True Origins of COVID-19. April 2025. whitehouse.gov/lab-leak-true-origins-of-covid-19. The U.S. executive branch’s official position adopting the laboratory-origin hypothesis after five years of institutional resistance.
  • Rosenau, Milton J. “Experiments to Determine Mode of Spread of Influenza.” Journal of the American Medical Association 73, no. 5 (August 1919): 311-313. The failed-transmission experiments performed at Gallops Island, Boston, during the 1918-1919 period.
  • McCoy, George W., and Roy D. Richey. “Unsuccessful Attempts to Transmit Influenza Experimentally.” Public Health Reports 34, no. 27 (July 1919): 1411-1424. The Angel Island, San Francisco, parallel experiments.
  • Firstenberg, Arthur. The Invisible Rainbow: A History of Electricity and Life. Chelsea Green, 2017. The correlation between electromagnetic environment deployment and pandemic events.
  • Lanka, Stefan. Die Masern-Virus-Klage (The Measles Virus Case). 2011-2017. Documentation of the German court case on Kochian standards for the measles virus. The higher court ruling and federal supreme court affirmation are matters of public record in the German court system.
  • Cowan, Thomas, and Sally Fallon Morell. The Contagion Myth: Why Viruses (Including “Coronavirus”) Are Not the Cause of Disease. Skyhorse, 2020.
  • Kaufman, Andrew. The Rose/Rona Conversation. 2020-2024. Published interviews and papers challenging the isolation paradigm.
  • Stone, Mike. ViroLIEgy archival project. Available at viroliegy.com. A systematic review of the isolation and transmission literature from within the strong skeptic position.
  • Engelbrecht, Torsten, and Claus Köhnlein. Virus Mania. Trafford, 2007 (revised edition 2021, with Samantha Bailey and Michael Palmer). Book-length treatment of the methodological problems across the major declared viral diseases of the twentieth century.
  • Massey, Christine. FOI Responses on Viral Isolation. Ongoing compilation at fluoridefreepeel.ca of freedom-of-information agency responses declining to produce isolation documentation.
  • Gould, Stephen J., Amy M. Booth, and James E. K. Hildreth. “The Trojan Exosome Hypothesis.” Proceedings of the National Academy of Sciences 100, no. 19 (2003): 10592-10597. Hildreth, a co-author, is the source for the “the virus is fully an exosome” formulation.
  • West, Jim. DDT/Polio. Available at harvoa.org. The correlation between pesticide deployment (DDT and related compounds) and mid-century polio epidemics.
  • Béchamp, Antoine. The Blood and Its Third Anatomical Element. Translated by Montague R. Leverson. Originally published 1912.
  • Hume, E. Douglas. Béchamp or Pasteur? A Lost Chapter in the History of Biology. C. W. Daniel, 1923. The classic terrain-theorist account of the priority dispute.
  • Latour, Bruno. The Pasteurization of France. Harvard University Press, 1988. The science-studies treatment of how Pasteur’s institutional positioning shaped the resolution of the dispute.
  • Sheldrake, Rupert. A New Science of Life: The Hypothesis of Formative Causation. Blond & Briggs, 1981. The morphic-resonance framework referenced in the discussion of population-scale symptom crystallization.
  • Borger, Pieter, et al. “External Peer Review of the RTPCR Test to Detect SARS-CoV-2 Reveals 10 Major Scientific Flaws at the Molecular and Methodological Level.” Eurosurveillance (Corman-Drosten review report), November 2020.
  • Mandavilli, Apoorva. “Your Coronavirus Test Is Positive. Maybe It Shouldn’t Be.” New York Times, 29 August 2020.
  • Malone, Robert W. Lies My Gov’t Told Me: And the Better Future Coming. Skyhorse, 2022.
  • McCullough, Peter A., et al. “Pathophysiological Basis and Rationale for Early Outpatient Treatment of SARS-CoV-2 (COVID-19) Infection.” American Journal of Medicine 134, no. 1 (2021): 16-22.
  • Ioannidis, John P. A. “Reconciling Estimates of Global Spread and Infection Fatality Rates of COVID-19: An Overview of Systematic Evaluations.” European Journal of Clinical Investigation 51, no. 5 (2021).
  • Gøtzsche, Peter C. Deadly Medicines and Organised Crime: How Big Pharma Has Corrupted Healthcare. Radcliffe, 2013. The mainstream-credentialed institutional critique.

What links here.

6 INBOUND REFERENCES