Notebook Technology

Dr. Daniel Sodickson (right) and other NYU School of Medicine researchers examine an MRI. (Courtesy of FAIR and NYU School of Medicine)


A smarter scanner

Researchers believe artificial intelligence can improve MRI speeds

Magnetic resonance imaging, the gold standard of modern diagnostic imaging, provides doctors with greater detail than other imaging techniques, such as X-ray or CT scans. But MRI scans take a long time—anywhere from 15 minutes to an hour or more. All the while, the patient must lie still inside a claustrophobia-inducing metal tube.

That scenario could change. Facebook’s Artificial Intelligence Research (FAIR) group and the New York University School of Medicine recently announced a collaborative research project that will use machine-learning techniques to make MRI scans up to 10 times faster.

“Using AI, we believe it may be possible to capture less data and therefore image faster, while still preserving or even enhancing the rich information content of MR images,” Dr. Daniel Sodickson, vice chair for research in radiology at NYU School of Medicine, told Forbes.

Facebook researchers will train their “fastMRI” model using an NYU-provided data set of 3 million images of the knee, brain, and liver. The challenge, according to a Facebook blog, is to get the artificial intelligence network to “recognize the underlying structure of the images in order to fill in views omitted from the accelerated scan” and to bridge those gaps without sacrificing accuracy. Missing a few key pixels could mean the difference between a clear scan and one showing an anomaly such as a torn ligament or a tumor.
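The fill-in idea can be illustrated with a toy sketch. This is not the fastMRI model itself, which uses a trained neural network; here, simple linear interpolation stands in for the learned reconstruction step, and all names are illustrative.

```python
# Toy illustration of accelerated scanning: keep only some samples,
# then reconstruct the omitted positions. Linear interpolation is a
# crude stand-in for the neural network's learned fill-in.

def undersample(signal, keep_every=2):
    """Simulate an accelerated scan by keeping every Nth sample."""
    return {i: v for i, v in enumerate(signal) if i % keep_every == 0}

def reconstruct(samples, length):
    """Fill omitted positions by interpolating between kept samples."""
    kept = sorted(samples)
    out = []
    for i in range(length):
        if i in samples:
            out.append(samples[i])
            continue
        lo = max(k for k in kept if k < i)
        higher = [k for k in kept if k > i]
        if not higher:                  # past the last kept sample: hold value
            out.append(samples[lo])
            continue
        hi = min(higher)
        frac = (i - lo) / (hi - lo)
        out.append(samples[lo] + frac * (samples[hi] - samples[lo]))
    return out

signal = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0]
sparse = undersample(signal)            # keeps indices 0, 2, 4
restored = reconstruct(sparse, len(signal))
```

The stakes of the real problem come from exactly the gap this toy glosses over: interpolation can only smooth between known points, while a diagnostic reconstruction must recover fine detail, such as a torn ligament, that was never sampled.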

If the project is successful, the researchers believe the benefits of faster MRIs would extend beyond a more comfortable patient experience.

“You also get increased accessibility in areas with MRI shortages and you can get improved image quality when you’re trying to image things that move fast, like the heart,” Sodickson told Forbes. “If we can get it fast enough to replace X-rays or CT [scans] then we can also reduce radiation exposure for the population while still getting the critical medical information.”

An autistic child interacts with his mother at the Wall Lab in Stanford, Calif. (Jeff Chiu/AP)


Help on the spectrum

Could the Google Glass device benefit kids with autism?

Children with autism appeared to improve their social skills after using a wearable device that helped them recognize emotions in people's facial expressions, according to new research.

In a pilot study published in npj Digital Medicine in August, a team at the Stanford University School of Medicine described a smartphone app they developed and paired with Google Glass to give kids real-time cues about facial expressions. The device, worn like a pair of glasses, has a camera that records the wearer’s field of view, as well as a small audio speaker. As the child interacts with other people, the camera captures those people’s facial expressions, and the app identifies and names their emotions through the Google Glass speaker.
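The cue pipeline described above can be sketched in a few lines. The rule-based classifier and every name below are hypothetical stand-ins for the app's trained facial-expression model, included only to show the frame-to-spoken-label flow.

```python
# Toy sketch of the pipeline: a camera frame's facial features are
# classified, and the emotion name is read aloud through the speaker.
# The rules and thresholds are illustrative, not the app's real model.

def classify_expression(features):
    """Map crude facial features to an emotion label (toy rules)."""
    if features.get("mouth_curve", 0) > 0.3:
        return "happy"
    if features.get("brow_furrow", 0) > 0.5:
        return "angry"
    if features.get("eye_openness", 0) > 0.8:
        return "surprised"
    return "neutral"

def speak(label):
    """Stand-in for audio output through the Google Glass speaker."""
    return f"They look {label}."

frame_features = {"mouth_curve": 0.6}   # e.g. a broad smile
cue = speak(classify_expression(frame_features))
```

The real system replaces the hand-written rules with a model trained on labeled faces, but the loop is the same: capture, classify, announce.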

The study authors based the therapy on a well-studied autism treatment called applied behavior analysis, in which a trained practitioner teaches emotion recognition using flash card exercises.

“We have too few autism practitioners,” study senior author Dennis Wall, associate professor of pediatrics and biomedical data science, told Stanford Medicine News. “The only way to break through the problem is to create reliable, home-based treatment systems. It’s a really important unmet need.”




Quick claims

Could artificial intelligence replace insurance claims adjusters?

Your insurance agent may not have been replaced by a computer, but artificial intelligence (AI) may soon take over one of the most critical jobs in the insurance business: accident and disaster appraisal.

Tractable, a London-based startup, is using machine learning to train an AI system to conduct visual damage assessment with the goal of speeding up insurance payouts and access to governmental disaster relief funds.

“Our belief is that when accidents and disasters hit, the response could be 10 times faster thanks to AI,” Alexandre Dalyac, Tractable CEO and co-founder, told TechCrunch. “Everything from road accidents, burst piping to large-scale floods and hurricane.”

The traditional process of releasing claims funds after an automobile accident, for example, begins with a visual damage appraisal by an experienced claims adjuster, a process that can take days or weeks. Tractable claims its AI, which was trained on millions of images of vehicle damage, can do the same job in minutes: policyholders send photos of the damage to their insurance company, which then uses Tractable’s AI program to instantly estimate the repair cost.
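The photo-to-estimate flow above can be sketched as a toy program. The part labels, prices, labor rate, and function names are illustrative assumptions, not Tractable's actual API or pricing; in the real system a vision model produces the detections from the policyholder's photos.

```python
# Toy sketch of the claims flow: damage labels predicted from photos
# are mapped to a repair-cost estimate. All figures are invented.

PART_COSTS = {"bumper": 450, "door": 700, "windshield": 300}
LABOR_RATE = 60  # assumed dollars per hour

def estimate_repair(detections):
    """Sum part costs plus labor for each detected damaged part.

    `detections` is a list of (part, labor_hours) pairs, the kind of
    output a trained vision model might produce from damage photos.
    """
    total = 0
    for part, hours in detections:
        total += PART_COSTS.get(part, 0) + hours * LABOR_RATE
    return total

# e.g. a fender-bender: damaged bumper (2h) and door (3h)
quote = estimate_repair([("bumper", 2), ("door", 3)])
```

The hard part Dalyac describes is upstream of this arithmetic: inferring internal damage, and therefore the right detections, from exterior photos alone.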

Dalyac conceded that it’s a difficult machine learning problem to correlate exterior photos with internal damage, something an experienced human adjuster can do easily.

“Our AI has already been trained on tens of millions of these cases, so that’s a perfect case of us already having distilled thousands of people’s work experience,” he said. “That allows us to get hold of some very challenging correlations that humans just can’t do.”
