
Received — 28 April 2026 The Conversation

Facial recognition data is a key to your identity – if stolen, you can’t just change the locks

When you're out and about, your face isn't just visible – it's captured. John Keeble/Getty Images

A woman strolls into a grocery store, thinking about grabbing some apples. Before she even reaches the produce aisle, a security camera has scanned her face. Whether the system is checking for shoplifters or simply logging her arrival, her face has joined a digital ledger, a trace she can’t easily erase. Retailers, banks, airports, stadiums and office buildings are doing the same.

But what if the woman’s facial information is stolen or misused? If a cybercriminal steals her password, she can change it. If they acquire her credit card number, she can cancel the card. But she can’t reset or revoke the appearance of her cheekbones.

Facial recognition systems typically don't store actual images. Instead, they convert a face into a mathematical template that maps the positions and proportions of its features. When a camera scans a person later, the system checks the live face against these stored templates to confirm an identity.
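The matching step described above can be sketched in a few lines of code. This is a toy illustration, not any vendor's algorithm: real systems derive templates from trained neural networks with hundreds of dimensions, and the vectors and threshold below are invented stand-ins.

```python
import numpy as np

def cosine_similarity(a, b):
    """Similarity between two feature vectors; 1.0 means identical direction."""
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

def matches(live_scan, stored_template, threshold=0.95):
    """Accept the identity only if the live scan is close enough to the template."""
    return cosine_similarity(live_scan, stored_template) >= threshold

# Hypothetical "template" saved at enrollment: just numbers describing a face.
enrolled = np.array([0.61, 0.42, 0.88, 0.15])

# A later scan of the same person differs slightly (lighting, angle, noise)...
same_person = enrolled + np.random.default_rng(0).normal(0, 0.01, 4)
# ...while a different face yields a very different vector.
stranger = np.array([0.10, 0.95, 0.20, 0.70])

print(matches(same_person, enrolled))  # True
print(matches(stranger, enrolled))     # False
```

The sketch makes the article's point concrete: the template is not a photo, but it is a stable numeric key, and anyone who steals it can run this same comparison.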

In my work as a cybersecurity professor at Rochester Institute of Technology, I have found that even though templates are more secure than photos – which anyone online can capture and manipulate – templates, too, can be stolen. Once that happens, these digital keys create a lifelong vulnerability. If a facial recognition database is breached, the “locks” that a template opens – accessing a bank app, getting through security at an airport, entering an office building – can’t be reset. A person’s face is permanent, and so is the threat.

The threat isn’t theoretical. Biometric data has been stolen in data breaches. In 2024, biometric data from a facial recognition system used at bars and clubs in Australia was hacked. And in 2019, biometric data from a pilot facial recognition system set up by U.S. Customs and Border Protection was breached in an attack on a subcontractor’s network. It’s not clear whether anyone’s stolen biometric data has been exploited, however.

a sandwichboard sign outside a stadium
Catching a ballgame? Security cameras might be catching and digitizing your face. AP Photo/Matt Slocum

Tracking your face

All biometric identifiers carry risks. Fingerprints and iris scans, however, are typically used in controlled situations, such as unlocking a person’s phone or allowing someone to enter a building. In these cases, a person has to deliberately look at a scanner. Cameras in public spaces, in contrast, can capture faces as people walk by, from a distance and without the people whose faces are scanned realizing it.

If a fingerprint or iris database is breached, a thief still needs to physically present that finger or eye, or a fake of it, to a scanner. However, someone could match a stolen facial template against images from surveillance cameras or photos circulating online, making it easier to identify a person of interest or track someone’s movements and activities.

There’s also a big difference, technically and ethically, between keeping a face on a phone versus handing it over to a database. On modern Apple devices and many Android systems, biometric data used to unlock the devices is stored locally in a dedicated hardware chip and is not shared with the manufacturer or cloud services for authentication. As a result, a breach of corporate or cloud systems would not expose these device-level biometric templates.

Some street and security cameras in public spaces are passive, simply watching as people pass by and keeping no long-term records. But others may track people as they move, linking faces to databases and creating a persistent digital trail. The risk rises when organizations use systems to follow particular people across multiple databases. Airport systems could compare a traveler’s face against passport or airline databases. Stadiums may compare faces against local security watch lists or law enforcement lists. The company that manages Madison Square Garden has used facial recognition to bar entry to lawyers at firms that represented people who sued the company.

Some large retail chains, such as Wegmans and Target, also use facial recognition systems in their theft prevention efforts. Every new capture adds another permanent record.

People hold small cardboard images of Amazon CEO Jeff Bezos in front of their faces.
Demonstrators hold images of Amazon CEO Jeff Bezos in front of their faces during a protest over the company’s facial recognition system. AP Photo/Elaine Thompson

Many companies do not have expertise in cybersecurity and rely on third-party vendors to manage their data. If those centralized systems are breached – or the datasets are linked across platforms, vendors or data brokers – your face can become a sort of persistent identifier, which can be used to expose or track you. In some cases, when combined with other compromised data, your captured face can lower the barrier to impersonating you.

When a person’s face meets their data

A face can function like a “primary key” – a unique and stable identifier that connects records. If one database links a facial template to an email address, and a data breach connects that email to financial or personal records, an identity thief with a stolen template could access all that information.
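To see why a stable identifier behaves like a database “primary key,” consider this minimal sketch. Every name and value here is invented; it simply shows how two separately leaked datasets that share one key can be joined into a fuller profile.

```python
# Hypothetical leak 1: facial templates linked to email addresses.
facial_db = {"template_7f3a": {"email": "jane@example.com"}}

# Hypothetical leak 2: email addresses linked to financial records.
breach_db = {"jane@example.com": {"bank": "First National", "ssn_last4": "1234"}}

def link_records(template_id):
    """Follow the template -> email -> financial-record chain across leaks."""
    email = facial_db[template_id]["email"]
    return {"email": email, **breach_db[email]}

print(link_records("template_7f3a"))
# {'email': 'jane@example.com', 'bank': 'First National', 'ssn_last4': '1234'}
```

Unlike the email address or a card number, the key at the root of this chain – the face – can never be rotated, which is what makes the join permanent.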

And combining a template with AI tools such as deepfakes or three-dimensional face models could, in some cases, allow a criminal to impersonate an individual in systems that require proof of a live face, slipping into a forged digital identity like slipping into a costume.

When criminals combine biometric templates with other leaked data, such as logins for social media profiles or home addresses, they can build “super-profiles” connected to many of a person’s activities. Because the face acts as a permanent linking key, this level of identity theft is difficult to reverse.

How to minimize the threat

People are still figuring out how to live with widespread biometric collection. The convenience of smoothly passing security checks or making purchases is appealing, but it often comes with a permanent risk to privacy and security.

To lessen the threat, organizations can follow several data privacy best practices. They can keep only the information that is necessary, erase the rest quickly and encrypt every template, storing encrypted templates rather than raw photos. They can use safeguards such as up-to-date liveness detection to help ensure that their systems are interacting with real people rather than photographs, masks or deepfakes. And they can adopt a privacy-by-design approach: keeping data only as long as necessary, clearly documenting how it’s used and restricting who has access.

Consumers can take steps as well. In places with privacy laws, such as California, Illinois and the European Union, people can submit a data access request to see what biometric data a company holds and, in some cases, ask for its deletion. They can also ask retailers anywhere what data is collected, how long it is kept and how it’s protected.

The Conversation

Jonathan S. Weissman does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

Tapping your genome with AI and quantum computing could deliver on the promise of personalized medicine – but practical and ethical hurdles remain

While quantum computing has a long way to go, it can open tantalizing new doors for the field of genomics. herstockart/iStock via Getty Images Plus

Decades after researchers first sequenced the human genome, scientists throughout the world are still working to understand it. Despite diligent global efforts to link uncommon variations in DNA sequences with human disease, progress has been slow – in large part because of limits in scientific understanding, and in part because of limits in computational technology.

Artificial intelligence has the potential to help scientists decipher the millions of genetic variations present in the genomes of different people in order to identify which ones lead to disease and which ones do not. In order to fully exploit the power of AI, however, scientists need to compare the genomes of thousands or tens of thousands of people. This task not only requires intense computational effort, it is also prone to error and will take years to complete.

Quantum computing has the potential to facilitate that process. We are researchers with a long-standing interest in finding ways to use genetics in the clinic and developing new technologies to study the human genome. Combining quantum computing with AI has the potential to accelerate genomic analysis far beyond traditional methods. For time-sensitive medical conditions, faster decoding of genetic information can directly inform urgent treatment decisions and, in some cases, be lifesaving.

Conventional vs. quantum computing

In conventional computing, each individual unit of information – a binary digit, or bit – can represent only one of two states: 0 or 1.

The qubits used in quantum computing, however, can exist in a superposition of both states at once, and adding qubits increases the number of states a machine can represent exponentially. The power of quantum computers lies in manipulating all of those possibilities together for problems with large numbers of variables, rather than checking them one at a time as even the fastest classical computer must. This allows quantum computers to efficiently solve certain types of problems, such as factoring the large numbers that underpin today’s encryption schemes and performing combinatorial optimization to find the best route through a large number of points.
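The exponential growth behind that claim is easy to see on paper. The toy below is a purely classical simulation (plain NumPy, no quantum hardware assumed): it builds the statevector for n qubits in an equal superposition and shows how the number of amplitudes needed to describe the register explodes.

```python
import numpy as np

def uniform_superposition(n_qubits):
    """Statevector after putting each of n qubits into an equal superposition."""
    dim = 2 ** n_qubits  # an n-qubit register has 2**n basis states
    return np.full(dim, 1 / np.sqrt(dim))

for n in (1, 2, 10, 20):
    state = uniform_superposition(n)
    print(n, "qubits ->", len(state), "amplitudes")
# 20 qubits already take 2**20 = 1,048,576 amplitudes to describe,
# which is one reason classical simulation of large quantum machines breaks down.
```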

Quantum computers work much differently from the computer you’re likely using to read this article.

Still, quantum computing is currently in its infancy. Despite the enormous potential of this technology, computer scientists are dealing with challenges related to its scalability, error correction, hardware development and the setting of standards.

There are also significant time and cost constraints associated with ameliorating these challenges. Experts in the field estimate that it may be at least a decade before quantum computing will be truly useful outside of the laboratory.

Bigger and better data analysis

If researchers are able to overcome these challenges, combining AI and quantum computing may not only enable scientists and clinicians to better understand the human genome but also to leverage that understanding to improve patient care.

Currently, researchers are able to use AI to analyze genomic data in combination with limited amounts of other biological information, such as gene activity, epigenomics, RNA signatures and protein function. Quantum computing could allow AI to process far more massive and highly detailed datasets.

This might look like integrating large-scale genetic, protein and spatial datasets with clinical, demographic and real-time physiological data. This systems-level approach enables a more comprehensive and accurate understanding of complex biological systems beyond DNA sequence alone that could be used to improve public health.

In other words, quantum computing could make it possible to sequence a patient’s genome and combine that information with other information about how their body works at the molecular level to improve the accuracy of diagnoses and determine the best course of treatment in hours instead of months.

Challenges in access and privacy

Like many burgeoning technologies, combining AI with quantum computing has inherent and inescapable challenges. In particular, there are several ethical issues related to healthcare access.

One will be cost. New technologies are typically expensive, and that expense will likely widen the gap between those who can afford the best healthcare and those who cannot. Anticipating these costs and finding creative solutions preemptively will be necessary if everyone is to benefit equally.

While there are likely many approaches to reducing out-of-pocket expenses for healthcare, federal legislation could mandate affordable or free genetic information-based care to those in greatest financial need. Similar to the 2008 Genetic Information Nondiscrimination Act, which prohibits discrimination based on genetics, a new law could prohibit healthcare providers from withholding genetic information-based care from those who cannot afford it.

Close-up of face of person viewing computer screen, colorful DNA sequence reflected on their glasses
Biological data inherently comes with a privacy risk. Tek Image/Science Photo Library

Another challenge will be availability. These technologies will likely first be available at only the top medical centers in the country, which traditionally have the research funding and the cadre of skilled scientists and clinicians needed to develop new diagnostic methods and treatments. Consequently, the latest advances in health technology will be unavailable to people who physically or financially cannot travel to receive the best medical care.

A combination of telemedicine, centralized laboratories and shared data could potentially help make new technologies more accessible.

There are also privacy concerns intrinsic to sharing personal health data. Truly anonymizing personal information remains a challenge, and privacy concerns are likely to prevent some people from taking advantage of potentially lifesaving technologies.

One approach that may quell these fears is a model called federated blockchain governance. This approach involves sharing control of a blockchain, which is a digital ledger used to track transactions, among a small group of institutions rather than a single entity or the general public. Limiting the number of trusted curators of genetic data reduces the risk of privacy violations or security breaches and subsequently increases the chance that patient data will remain private.

Improving public health

Despite these challenges, combining advances in quantum computing and AI has the potential to significantly drive innovation and improve public health.

When scientists and clinicians are able to accurately identify the genetic basis of disease and potential risk factors, they will not only be able to develop better treatments but also help patients and healthcare providers know what symptoms to look for among those predisposed to certain conditions.

Taken together, this knowledge can improve public health, reduce the cost of healthcare and improve quality of life.

The Conversation

The authors do not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.

Received — 17 April 2026 The Conversation

Trump sidelined Congress’ authority over war on Iran – and lawmakers allowed it, extending a 75-year trend

Congress has not used its constitutionally granted power to influence the war in Iran. Bloomberg Creative via Getty Images

Lawmakers in the U.S. House of Representatives set April 21, 2026, as the date to hear from and question top Pentagon officials Adm. Brad Cooper, the head of U.S. Central Command, and Gen. Dagvin R.M. Anderson, head of U.S. Africa Command, about the war in Iran. But Republican legislators put off the hearing for a month, giving up – for now – the opportunity to exercise oversight of the war.

Adam Smith, the top Democratic member of the House Armed Services Committee, told The New York Times, “We are six weeks into this conflict. And we still haven’t gotten a public briefing from anyone in the administration about the war.”

President Donald Trump’s military campaign against the Iranian regime is currently in a ceasefire. Despite the low approval rating of the war, the president has not drawn the conflict to a close, and the result of the operation is so far unclear.

The postponed hearing was only one example of how Congress has been noticeably meek about the war, with most Republicans killing the many Democratic efforts to exercise constitutionally granted power over engaging in such military conflicts. For the fourth time, the Senate on April 16, 2026, rejected a war powers resolution.

As scholars who research war powers and have a book coming out about President Barack Obama’s decision-making about the Afghan war, we know that the reluctance of Congress to assert its power is, in fact, history repeating itself, as is the president’s unilateral action.

A man standing at a lectern flanked by flags, pointing into the audience of raised hands.
President Donald Trump and Defense Secretary Pete Hegseth conduct a news conference in the White House briefing room about the war in Iran on April 6, 2026. Tom Williams/CQ-Roll Call, Inc via Getty Images

Historically meek Congress

Article 1 of the U.S. Constitution gives Congress the power to declare war, not the president. But most modern presidents and their legal counsel have asserted that Article 2 of the Constitution allows the president to use the military in certain situations without prior congressional approval – and have acted on that, sending troops into conflicts from Panama to Libya with no regard for Congress’ will.

Under the 1973 War Powers Resolution – passed over President Richard Nixon’s veto – the president must inform Congress within 48 hours of initiating military action and must seek legislative authorization if the operation will last more than 60 days.

Since its passage, presidents have dutifully informed Congress within the 48-hour window when they unilaterally initiate military operations. Typically, they use the following language: “Pursuant to” their power as commander in chief and chief executive, they are initiating an operation.

Yet presidents since Nixon have never formally acknowledged the constitutionality of the War Powers Resolution. They have, however, mentioned it in their letters to Congress about their actions, and for the most part they have abided by its restrictions. The language is crucial: presidents tend to use the phrase “consistent with” the War Powers Resolution when they inform Congress about military operations.

The second Trump administration has broken with that standard. In Trump’s message to Congress about the Iran war, sent on March 2, 2026, he did not acknowledge the War Powers Resolution or the Constitution, or even pay lip service to either.

Instead, Trump has sidestepped the traditional use of the War Powers Resolution – and avoided the congressional oversight that comes with it – by relying on executive orders to convey his intent to use military power against the Iranian regime. That move, whether legal or not, has given the president a great deal of freedom to decide what the military can do, what tools it can use and how long it can operate. His decision to send another carrier group and thousands more U.S. troops to the region is just the latest example.

Congress has proved either unable or unwilling to check this presidential unilateralism. Shortly after the start of the military campaign against Iran, Democratic Sen. Chris Murphy introduced war powers legislation to constrain Trump that failed to pass the Senate. In the House on March 5, members narrowly rejected a resolution that would have impeded a broader or longer operation.

To a meaningful extent, we are watching history repeat itself: Over the past seven decades during times of war, members of Congress have not wanted to act, and presidents have not wanted to ask permission.

From alacrity to deference

Presidents Woodrow Wilson and Franklin D. Roosevelt made their case for war and obtained a formal declaration from Congress within three days in 1917 and within the same afternoon in 1941, respectively.

Since the start of the Korean War, however, members of Congress have demonstrated more deference and less assertiveness.

In Korea, President Truman did not get congressional authorization for the war.

Following North Korea’s invasion of the South in June 1950, Truman bypassed Congress, making his case for war to the United Nations Security Council. In July 1950, United Nations Security Council Resolution 84 “authorized the United States to establish and lead a unified command comprised of all military forces from UN member states, and authorized that command to operate under the UN flag.”

A soldier with a gun ordering soldiers on the ground to do something.
U.S. soldiers in 1951 order Chinese prisoners to the ground outside Seoul, South Korea, before U.S. and U.N. troops took the city. AFP via Getty Images

Truman’s rhetoric about American combat operations on the Korean peninsula being part of a U.N. “police action” became increasingly tenuous, but he managed to avoid seeking congressional permission. In doing so, Truman created a precedent in which a congressional declaration of war was no longer necessary for the American military to carry out combat operations. Sen. Robert Taft, a Republican, opposed this lack of congressional deliberation, declaring that Truman’s actions represented a “usurpation” of the war powers authority. But Congress did nothing to stop the war as the tactical and strategic picture in Korea stalemated.

In Vietnam, in the aftermath of the 1964 Gulf of Tonkin incident – a purported attack by the North Vietnamese on American naval vessels that did not, in fact, occur – President Lyndon Johnson used the alleged crisis to push for congressional authorization for the escalation of force in Southeast Asia.

Johnson presented the Gulf of Tonkin Resolution to Congress, which quickly passed it. The resolution allowed Johnson to freely escalate American military involvement in Southeast Asia with a vague authorization to engage militarily as he saw fit, in contrast to the explicit declarations of war issued for earlier conflicts.

Col. Harry G. Summers, who wrote an influential strategic analysis of the Vietnam War, points to the Gulf of Tonkin Resolution as evidence that the relevant actors – the executive, Congress and the military – failed to foresee the scale of the course of action they were embarking on.

The resolution significantly increased the president’s freedom of action – and freedom from oversight – and marked a major step toward the Americanization and escalation of the war in July 1965. Despite the deeply troubled engagement in South Vietnam and the passage of the War Powers Resolution, we still see presidents acting alone, without consulting members of Congress, let alone getting authorization.

Refusing responsibility

In Summers’ Vietnam postmortem, he relates a telling anecdote about a professor at West Point. The professor, an Army officer, remarked, “When people ask me why I went to Vietnam I say, ‘I thought you knew. You sent me,’” a comment indicative of “the civilian sector’s growing refusal to take responsibility for the kind of army it needs.”

In the case of Trump’s decision-making concerning hostilities with Iran, Americans will one day need answers to the questions: Why did the United States engage in this war with unclear political objectives? And why did Congress allow it to continue?

This story contains material from an article published on March 6, 2026.

The Conversation

The authors do not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.

Received — 14 April 2026 The Conversation

Antibiotics can trigger bacteria to release bubbles of inflammation tinder, making it harder to treat infection

_E. coli_ is mostly harmless and sometimes beneficial – but some strains can cause serious infection. Photo by Eric Erbe, Colorization by Christopher Pooley/USDA ARS

Antibiotics are designed to kill harmful bacteria and help the body recover from infection. But some antibiotics may also push bacteria to release tiny particles that can make inflammation worse.

While inflammation is part of the body’s natural defense against infection, too much inflammation can damage healthy tissue and interfere with healing. In severe cases, excessive inflammation can become life-threatening.

These particles are called bacterial extracellular vesicles, or BEVs. These microscopic, bubblelike structures carry proteins, toxins and other molecular signals that influence how the immune system of the host responds. Bacteria naturally release BEVs into their surroundings as a way to communicate with their environment, remove damaged cellular material and interact with host cells.

Although incredibly small, these structures can have powerful effects on the human body. When BEVs enter the bloodstream, they can interact with cells that line blood vessels and trigger an immune response. In some cases, this can increase inflammation and lead to sepsis, a condition where the body’s response to infection becomes dangerously uncontrolled, damaging tissues and sometimes leading to organ failure.

I am a biomedical engineer studying how bacterial extracellular vesicles influence inflammation during infection. In my recently published research, I found that certain types of antibiotic cause bacteria to release significantly more of these vesicles than others. This finding suggests that the way an antibiotic kills bacteria may also influence how much inflammatory material is released into the body.

When antibiotics stress bacteria

Antibiotics work in different ways. Some target the bacterial cell wall, weakening it until the cell breaks apart and dies. Others interfere with key cellular processes such as protein production or DNA replication, preventing bacteria from growing. Whatever their mechanism, antibiotics control infection by killing the bacteria that are causing it.

But antibiotics also place bacteria under stress, and that stress can cause bacteria to release more extracellular vesicles carrying inflammatory molecules. To explore this process, I exposed the bacteria E. coli to several commonly used antibiotics and measured how many vesicles they made. The goal was simple: Compare how different types of antibiotics influence vesicle release and determine whether the way an antibiotic kills bacteria affects vesicle production.

Diagram of a large spherical sac containing various molecules targeted by antibiotics beta-lactam, amino-glycoside and quinolone
Antibiotics not only kill bacteria in different ways, they also interact with bacterial extracellular vesicles in different ways. CC BY-NC-ND

The results showed that not all antibiotics have the same effect on the vesicles bacteria produce.

Antibiotics that target the bacterial cell wall, including a widely used group of drugs known as beta-lactams, led to a noticeable increase in vesicle production. In contrast, antibiotics that act on protein or DNA processes showed a much smaller effect.

This difference likely reflects how bacteria respond to damage. When their cell wall is disrupted, bacteria may release more vesicles as a way to shed damaged material or adapt to stress. The inflammatory molecules these vesicles carry can further activate the body’s immune response.

This raises an important question: Could some antibiotics unintentionally amplify inflammation and make an infection worse?

My findings do not show that antibiotics directly make infections worse, but they do suggest that antibiotic type could influence not only how effectively bacteria are killed but also how the body responds to the infection. More research is needed to understand how these bacterial responses affect patients during severe infections, such as sepsis.

Why this matters for treating infections

It is important to emphasize that antibiotics remain one of the most effective and lifesaving tools in modern medicine. This research does not suggest they should be avoided. Instead, it highlights that bacteria are not passive targets. They actively respond to treatment, and those responses can have additional effects on the body.

Understanding how bacteria react to antibiotics could help researchers and clinicians better evaluate how different treatments influence both infection and inflammation. In situations where controlling inflammation is critical, such as severe infections, these differences may become especially important.

This work also reflects a broader shift in how scientists think about infection. Rather than focusing only on killing bacteria, researchers are increasingly studying how bacteria communicate, respond to stress and interact with the human body.

As scientists continue to uncover how bacteria behave under antibiotic pressure, it becomes clear that treating infection is not only about stopping bacterial growth but also about understanding the signals bacteria leave behind.

The Conversation

Panteha Torabian receives funding from NIH.

Received — 13 April 2026 The Conversation

Artemis II crew brought a human eye and storytelling vision to the photos they took on their mission

Astronaut Jeremy Hansen takes a picture through the camera shroud covering a window on the Orion spacecraft. NASA

In early April 2026, the Artemis II mission captivated me and millions of people watching from across the world. The crew’s courage, skill and infectious wonder served as tangible proof of human persistence and technological achievement, all against the mysterious backdrop of space.

People back on Earth got to witness the mission through remarkable photos of space captured by astronauts. Images created and shared by astronauts underscore how photography builds a powerful, authentic connection that goes beyond what technology alone can capture.

As a photographer and the director of the Rochester Institute of Technology’s School of Photographic Arts and Sciences, I am especially drawn to how these photographs have been at the center of the public’s collective experience of this mission.

In an era when image authenticity is often questioned and autonomous, AI-driven imaging grows ever more capable, NASA’s choice to train astronauts in photography has placed meaning over convenience and prioritized their human perspectives and creativity.

Capturing space from the crew’s perspective

Photography was not originally a high priority in NASA’s Apollo era. Astronauts took photographs only if they had the chance and all their other tasks were complete.

An image of the entire Earth from space.
‘The Blue Marble’ view of the Earth as seen by the Apollo 17 crew in 1972. NASA

Thanks in large part to the public response to those Apollo images – “Earthrise” and the “Blue Marble” are widely credited with helping catalyze the modern environmental movement – NASA shifted its approach, training its astronauts in photographic practices so their pictures could help capture the public’s imagination.

The Artemis II mission’s photographs have helped cut through the increasing volume of artificially generated images circulating on social media. NASA’s social media releases of the crew’s photographs have garnered thousands of shares and comments.

This excitement could be explained by the novelty of photos from space, but these images also stand apart as the products of astronauts experiencing these sights and interpreting them through their photographs. That difference calls for an important distinction: where the technology ends and the humanity begins.

An astronaut looking out the window of the Orion spacecraft, where the full moon is visible in space.
NASA astronaut Reid Wiseman watches the Moon from one of the Orion spacecraft’s windows. NASA

Human perspective versus AI tools

Photography has long integrated AI-powered software and data-driven tools in a variety of ways: to process raw images, fill in missing color information, drive precise focus and guide image editing, among others. These modern technological assists help human photographers realize their vision.

Artificial intelligence is also increasingly capable of operating machinery competently and autonomously, from cars to drones and cameras.

And AI can generate convincing, realistic images and videos from nothing more than a text prompt, using readily available tools.

Researchers train AI to mimic patterns informed by millions of sample images, and the algorithm can then either take or create a photograph based on what it predicts would be the most likely version of a successful, believable image.

Human-created photos are rooted in direct observation, intent and lived experience, while AI images – or choices made by AI-driven tools – are not. While both can produce compelling and believable visuals, the human photographs carry emotional power because the photographer is drawing from their experiences and perspective in that moment to tell an authentic story.

Artemis II photographs resonate not only because they are historic, but because they reflect the deliberate choices and intent of a human being in that specific moment and context. The exposure, camera settings, lens choice and composition are all dictated by the astronaut’s vision, skill, perspective and experience. Each image is unique. These choices give the images narrative power, anchoring them in human perspective.

The Earth shown partially shadowed beyond the Moon in space
NASA’s ‘Earthset’ photo captured by the Artemis II crew. NASA

Images to tell a story

Photographers choose what to include in the final version of their image to tell a story. In the Artemis II images, this human perspective comes out. In the “Earthset” photo, you see a striking juxtaposition of the Moon’s monochromatic, textured surface in the foreground against a slivered, bright Earth.

The choice to include both in the frame contrasts these objects literally and figuratively, inviting comparison. It creates a narrative where Earth is contrasted against the Moon – life is contrasted against the absence of it.

Another photo shows the nightside of the whole Earth, featuring the Sun’s halo, auroras and city lights. The choice to include the subtle framing of the window of the capsule in the lower left corner reminds the viewer where and how this image was captured: by a human, inside a capsule, hurtling through space. That detail grounds the photograph in the human perspective.

Both photos are reminiscent of Earthrise and the Blue Marble. These past images hold a place in the global collective consciousness, shaped by a shared historical moment.

The Artemis II photographs are anchored in this collective moment of lived human experience, yet also shaped by each astronaut’s viewpoint. The crew’s unique perspectives exemplify photography’s transformative power by inviting viewers to engage emotionally and intellectually with their journey. These photographs share the astronauts’ awe and wonder and affirm the value of human creativity and its ability to connect us in a captured moment.

The Conversation

Christye Sisson has received funding from the US government for research in media forensics.

Received — 9 April 2026 The Conversation

Bypass the Strait of Hormuz with nuclear explosives? The US studied that in Panama and Colombia in the 1960s

A nuclear bomb explodes at Bikini Atoll in the Pacific Ocean in 1946, one of several U.S. test explosions. Photo12/Universal Images Group via Getty Images

With the world struggling to get oil supplies moving from the Middle East, former House Speaker Newt Gingrich raised eyebrows with a social media post highlighting a radical idea: Use nuclear bombs to cut a new channel along a route that would avoid Iranian threats in the Strait of Hormuz.

Gingrich’s March 15, 2026, post linked to an article that labeled itself as satire. Gingrich has not clarified whether his endorsement was serious. But he is old enough to remember when ideas like this were not only taken seriously but actually pursued by the U.S. and Soviet governments.

As I discuss in my book, “Deep Cut: Science, Power, and the Unbuilt Interoceanic Canal,” the U.S. version of this project ended in 1977. At the time, Gingrich was launching his political career after working as a history and environmental studies professor.

Improving global trade and geopolitical influence

The idea for a new canal to move oil from the Middle East had emerged two decades earlier, in the context of another Middle East conflict, the Suez crisis. In 1956, Egypt seized the Suez Canal from British and French control. The canal’s prolonged closure caused the price of oil, tea and other commodities to spike for European consumers, who depended on the shipping shortcut for goods from Asia.

But what if nuclear energy could be harnessed to cut an alternative canal through “friendly territory”? That was the question asked by Edward Teller, the principal architect of the hydrogen bomb, and his fellow physicists at the Lawrence Radiation Laboratory in Livermore, California.

Partially sunken ships block a waterway.
Scuttled ships block one end of the Suez Canal in 1956, sparking an international outcry and conflict. Horace Tonge/NCJ Archive/Mirrorpix via Getty Images

President Dwight D. Eisenhower’s administration had already begun promoting atomic energy to generate electricity and to power submarines. After the Suez crisis, the U.S. government expanded plans to harness “atoms for peace.”

Project Plowshare advocates, led by Teller, sought to use what they called “peaceful nuclear explosions” to reduce the costs of large-scale earthmoving projects and to promote national security. They envisioned a world in which nuclear explosives could help extract natural gas from underground reservoirs and build new canals, harbors and mountainside roads, with minimal radioactive effects.

To kick-start the program, Teller wanted to create an instant harbor by burying, and then detonating, five thermonuclear bombs in an Indigenous village in coastal northwestern Alaska. The plan, known as Project Chariot, generated intense debate, as well as a pioneering environmental study of Arctic food webs.

Teller and the Livermore physicists also worked with the Army Corps of Engineers to study the possibility of using nuclear explosions to build another waterway in Panama. Fearing that the aging Panama Canal and its narrow locks would soon be rendered obsolete, U.S. officials had called for building a wider, deeper channel that wouldn’t require any locks to raise and lower the ships along its route.

A sea-level canal would not only fit bigger vessels; it would also be simpler to operate than the lock-based system, which required thousands of employees. Since the early 1900s, U.S. canal workers and their families had lived in the Canal Zone, a large strip of land surrounding the waterway. Panamanians increasingly resented having their country split in two by the racially segregated, colony-like zone.

A group of people holding hand tools stand next to a large pile of soil.
Building the Panama Canal involved backbreaking manual labor. Bettmann via Getty Images

Crossing Central America

Nuclear explosions appeared to make a new sea-level canal financially feasible. The greatest impetus for the so-called Panatomic Canal occurred in January 1964, when violent anti-U.S. protests erupted in Panama. President Lyndon B. Johnson responded to the crisis by agreeing to negotiate new political agreements with Panama.

Johnson appointed the Atlantic-Pacific Interoceanic Canal Study Commission to determine the best site to use nuclear explosions to blast a seaway between the two oceans. Funded by a $17.5 million congressional appropriation – the equivalent of around $185 million today – the five civilian commissioners focused on two routes: one in eastern Panama and the other in western Colombia.

The Panamanian route spanned forested river valleys of the Darién isthmus and reached 1,100 feet above sea level. To excavate this landscape, engineers proposed setting off 294 nuclear explosives along the route, in 14 separate detonations, using the explosive equivalent of 166.4 million tons of TNT.

This was a mind-blowing amount of energy: The most powerful nuclear weapon ever tested, the Soviet “Tsar Bomba” blast in 1961, released energy equivalent to 50 million tons of TNT.

To avoid the radioactivity and ground shocks, planners estimated that approximately 30,000 people, half of them Indigenous, would have to be evacuated and resettled. The canal commission considered this a formidable but not impossible obstacle, writing in its final report, “The problems of public acceptance of nuclear canal excavation probably could be solved through diplomacy, public education, and compensating payments.”

In 2020, the Russian government declassified this footage of the “Tsar Bomba” test blast from 1961.

A not-so-hot idea, in retrospect

As explored in my book, marine and evolutionary biologists of the late 1960s sought to study the project’s less obvious environmental effects. Among other potential catastrophes, scientists warned that a sea-level canal could unleash “mutual invasions of Atlantic and Pacific organisms” by joining the oceans on either side of the isthmus for the first time in 3 million years.

Plans for the nuclear waterway ended by the early 1970s, not over concerns about marine invasive species but rather due to other complex issues. These included the difficulties of testing nuclear explosions for peaceful purposes without violating the Limited Nuclear Test Ban Treaty of 1963 and the huge budget deficits caused by the Vietnam War.

Despite the geopolitical and financial constraints, the sea-level canal studies employed hundreds of researchers who increased knowledge of the isthmus and its human and nonhuman inhabitants. Ironically, the studies revealed that wet clay shale rocks along the Darién route meant nuclear explosives might not work well there.

The cover of a bound book.
The cover of the final report of a commission that studied blasting a canal across Central America with ‘peaceful nuclear explosions.’ Atlantic-Pacific Interoceanic Canal Study Commission via University of Florida

But for Project Plowshare’s biggest proponents, atomic excavation remained a worthwhile goal. In 1970, in their final report, the canal commissioners predicted that “someday nuclear explosions will be used in a wide variety of massive earth-moving projects.” Teller shared their commitment, as he explained near the end of his life in the 2000 documentary “Nuclear Dynamite.”

Today, given widespread awareness of the severe environmental and health effects of radioactive fallout, it is hard to envision a time when using nuclear bombs to build canals seemed reasonable. Even before Gingrich’s post sparked ridicule, press accounts described Project Plowshare using words like “wacky,” “insane” and “crazy.”

However, as societies struggle with disruptive new technologies such as generative AI and cryptocurrency, it is worth remembering that many ideas that ended up discredited once seemed not only sensible but inevitable.

As historians of science and technology point out, technological and scientific developments cannot be separated from their cultural contexts. Moreover, the technologies that become part of people’s daily lives often do so not because they are inherently superior, but because powerful interests champion them.

It makes me wonder: Which of the high-tech trends being promoted by influencers today will amuse, shock and horrify our descendants?

The Conversation

Christine Keiner received funding from the National Endowment for the Humanities, Lyndon Baines Johnson Foundation, and Eisenhower Foundation for the initial stages of this research.
