Meet Sturgeon, the AI tool that helps doctors identify brain tumors faster than ever

“The problem with neurosurgery,” says Eelco Hoving, a pediatric brain surgeon, “is that it is very unforgiving.” Even the specialists often have to start by cutting into someone’s head to get a better idea of what they’re treating.

In the case of neurological tumors, for instance, you often don’t know what you’re dealing with until you remove a flap of skull and biopsy a small portion of brain tissue for analysis. That’s how things work at the Princess Máxima Center, a partner of UMC Utrecht, one of the biggest research hospitals in the Netherlands, where Hoving is the clinical director of neuro-oncology. The sample is then sent to the lab, where two things happen. The pathologists sequence and profile the brain tissue and attempt to identify what kind of tumor is present, a laborious process that can take a week, often longer. In parallel, the lab takes a small cross section of the sample, freezes it, and thinly slices it with a scalpel—effectively taking a “frozen snapshot,” says Hoving—and then reviews it under a microscope in a process called a quick section. While a quick section can help identify what kind of tumor is present in just 15 to 20 minutes, it’s far less reliable than the slower method.

This leaves neurosurgeons with a dilemma as the patient lies there, brain exposed. A series of tricky determinations is made with imperfect information: Is there actually a tumor here? And if it is in fact cancer, is it an aggressive form that needs to be excised quickly? Or is it a milder tumor that can be treated with something less invasive, like chemotherapy?

Hoving specializes in operating on kids and teenagers, so he understands these limitations viscerally.

He remembers operating on a young patient a few years ago. The quick section indicated a highly malignant embryonal tumor called an ATRT. Because ATRTs are aggressive, Hoving decided the best course of action was to respond aggressively in turn. He made the call to perform a radical resection, carefully taking out more than 98 percent of the tumorous tissue—a deliberate and mentally draining process that requires unblinking concentration for hours on end. As a result of the procedure, the patient lost some motor control in one of his arms.

But when the lab results came back 10 days later, the pathology report showed the tumor wasn’t actually an ATRT at all; it was something far milder. “It happened to be a germinoma,” recalls Hoving, “and that could be treated very effectively with radiation and chemotherapy.” He had made the best call he could with the limited information available: “I tried to do a radical resection with the best intentions, but in hindsight, I shouldn’t have done that.”

Hoving is now part of a research team at Princess Máxima that since summer 2023 has experimented with artificial intelligence to identify tumors in real time. The team is using an AI model that it’s dubbed Sturgeon, which can categorize brain tumors with 90 percent accuracy in 40 minutes or less—enough time for a surgeon to make an informed decision while the patient is under the knife. “Pathologists still review every single slide,” says Bastiaan Tops, head of the child cancer pathology lab at Princess Máxima. The AI simply provides more information, another input.

IDENTIFYING TUMORS, SAVING LIVES: A team of doctors operates on a patient this year at the Netherlands’ Princess Máxima Center, where AI is regularly used to help medical teams diagnose tumors faster and more efficiently.

Photograph by Luca Locatelli

The project’s genesis can be traced back to early 2022, when Tops caught wind that one of his colleagues on campus, Jeroen de Ridder, principal investigator and associate professor at the Center for Molecular Medicine, was making strides in molecular sequencing using a new and relatively affordable device called a nanopore sequencer, which can read strands of DNA.

Tops had a light-bulb moment: What if they could combine this sequencer with some sort of advanced learning algorithm to radically speed up tumor identification?

Tops called de Ridder to see if he’d be interested in chatting. “He said he saw some application of nanopore sequencing for ultrarapid diagnosis,” remembers de Ridder. And since the campus is enviably small—getting anywhere is a five-minute walk at most—he strolled over to Tops’s office. “We sat together, and we started brainstorming what that might entail.”

The nanopore sequencer is a small device that starts at $2,000—cheap in medical terms, thus promising for hospitals in developing nations. It looks like a stapler and hooks up to a laptop via USB; not futuristic-looking at all, in other words. It works by running a strand of DNA through a membrane that has tiny holes, or nanopores, in it. Each nanopore is associated with an electrode and a sensor that records precise disruptions to the system’s electrical current as the strand moves through the pore. The result is a unique signature—each strand’s “squiggle”—that can be decoded into a base sequence. Simultaneously, researchers can deploy Sturgeon to identify what type of cancer is present.
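
To make the squiggle-to-sequence idea concrete, here is a toy Python sketch of that decoding step. The per-base current levels, the noise level, and the fixed number of samples per base are made-up illustrative values, and real basecallers rely on neural networks over k-mer-level current models rather than this nearest-mean lookup.

```python
# Toy illustration only: maps a simulated current trace ("squiggle") to bases
# by comparing each fixed-length event to hypothetical per-base current levels.
import numpy as np

# Hypothetical mean current levels in picoamps (illustrative values).
BASE_CURRENTS = {"A": 80.0, "C": 95.0, "G": 110.0, "T": 125.0}

def decode_squiggle(signal: np.ndarray, samples_per_base: int = 10) -> str:
    """Split the trace into fixed-length events and assign each event the base
    whose reference current is closest to the event's mean current."""
    bases = []
    for start in range(0, len(signal) - samples_per_base + 1, samples_per_base):
        event_mean = signal[start:start + samples_per_base].mean()
        best_base = min(BASE_CURRENTS, key=lambda b: abs(BASE_CURRENTS[b] - event_mean))
        bases.append(best_base)
    return "".join(bases)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    true_seq = "GATTACA"
    # Simulate a noisy squiggle for the true sequence, 10 samples per base.
    squiggle = np.concatenate(
        [rng.normal(BASE_CURRENTS[b], 3.0, size=10) for b in true_seq]
    )
    print(decode_squiggle(squiggle))  # typically recovers "GATTACA"
```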

The big obstacle, as with any AI-based identification software, like Google Reverse Image Search, is that it has to work from fragments of incomplete data, in this case at the molecular level. De Ridder likes to describe the work with a more specific example: “The challenge that the AI needs to solve is, if I show you a picture of an elephant, does a computer recognize what’s in the picture?” Let’s say you have only 1 percent of the picture—maybe a few gray pixels of the elephant’s trunk—and the other 99 percent is unknown or inscrutable. “Can we now make an AI that can still recognize that there’s an elephant in the picture?” he asks. “And that’s the AI that we developed. Ultimately, that’s what it does.”

The other fundamental quandary, especially in the case of pediatric brain tumors, is that hospitals may handle fewer than a hundred cases a year, which creates a data sparsity problem. With AI, you need a database of thousands of cases to even begin training something like Sturgeon to perform tumor identification. (Compare that with ChatGPT, which is trained on billions of freely available sentences from the internet.) How do you reconcile that small sample size with the need for unfathomably vast datasets? For de Ridder and Tops, it meant getting creative.

The pair pulled data from existing tumor samples found in previously published studies. Even then, they were operating at a deficit. “Well, we had about 3,000 samples,” de Ridder explains. “So not a whole lot.”

But from those 3,000 samples, they were able to simulate millions of unique nanopore sequences that they used to train Sturgeon—similar to how Neo in The Matrix gets centuries of kung fu training uploaded to his brain. “We did this 45 million times total to get to a dataset that has the volume required to train very complex networks,” says de Ridder. “And lo and behold, that appeared to work.”
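
For a sense of what that augmentation looks like in practice, here is a minimal Python sketch of the general idea, with hypothetical numbers: start from a few thousand fully profiled reference samples, simulate many partial runs that each observe only a tiny random fraction of the features, and train a classifier on those sparse inputs. The feature count, masking rate, and model are illustrative stand-ins, not the team’s actual pipeline.

```python
# Minimal sketch of the augmentation idea: from a small set of fully profiled
# reference samples, simulate many sparse "runs" and train a classifier on them.
# All numbers and the model choice are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)

N_REFERENCE = 3000          # fully profiled tumor samples (as in the article)
N_FEATURES = 500            # methylation-like features per sample (illustrative)
N_CLASSES = 5               # tumor types (illustrative)
OBSERVED_FRACTION = 0.01    # a sparse run observes roughly 1% of features
SIMULATED_PER_SAMPLE = 20   # scale this up to reach millions of simulated runs

# Fake reference profiles: each class gets its own feature signature plus noise.
labels = rng.integers(0, N_CLASSES, size=N_REFERENCE)
class_means = rng.normal(0, 1, size=(N_CLASSES, N_FEATURES))
profiles = class_means[labels] + rng.normal(0, 0.5, size=(N_REFERENCE, N_FEATURES))

def simulate_sparse_run(profile: np.ndarray) -> np.ndarray:
    """Keep a random ~1% of features and zero out the rest (missing = 0 here)."""
    mask = rng.random(profile.shape) < OBSERVED_FRACTION
    return np.where(mask, profile, 0.0)

# Build the augmented training set of simulated sparse runs.
X = np.vstack([simulate_sparse_run(p) for p in profiles for _ in range(SIMULATED_PER_SAMPLE)])
y = np.repeat(labels, SIMULATED_PER_SAMPLE)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"accuracy on held-out simulated sparse runs: {clf.score(X_test, y_test):.2f}")
```

Scaling the number of simulated runs per reference sample into the tens of thousands is what would push a dataset like this toward the tens of millions of training examples de Ridder describes.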

While Sturgeon is already being used in a research capacity to help with real-time decision-making, the Princess Máxima team is designing clinical trials to better understand Sturgeon’s impact. In theory, molecular sequencing could be broadened to help identify diseases and conditions beyond brain tumors: melanomas, fungal infections in the lungs, rare blood disorders like myelofibrosis. Using DNA to instantly recognize rare or difficult-to-diagnose ailments could radically reshape the landscape of medicine. Within the field of neurosurgery, some scientists are already theorizing that AI could be paired with surgical robots to automate complex procedures. Meanwhile, researchers at Harvard and Google recently produced the first 3D map of one cubic millimeter of brain tissue, which may offer even more ways to understand why we think how we do, when something may be cognitively amiss, or even how we experience emotion.

But progress is iterative. Slow by design. Medical regulators still need to be satisfied that Sturgeon, and technology like it, is safe, which could take five years or more. “We have to prove it,” says Hoving. “[We have] to give it a background that is really trustworthy.”

Though initially an AI neophyte, Hoving has become an evangelist for the possibilities AI can offer, particularly from an augmentative standpoint. Imagine, in 10 to 15 years, a neurosurgeon could wear a pair of AI-enabled glasses that would be able to pinpoint and identify cancers in real time: Terminator vision for tumor hunting.

“I think there’s a lot of technology, especially in imaging and in this mixed-reality type of thing, that will help us,” says Hoving. It will ultimately be up to neurosurgeons to make the final determination, as they always have—but they’ll be able to do so with far less guesswork.

This story appears in the November 2024 issue of National Geographic magazine.

Making his home in New York City, Chris Gayomali is a former articles editor at GQ magazine and now writes the health and wellness newsletter Heavies.
