How to predict the side effects of millions of drug combinations.

Doctors have no idea, but Stanford University computer scientists have figured it out, using artificial intelligence

July 11, 2018

An example graph of polypharmacy side effects derived from genomic and patient population data: protein–protein interactions, drug–protein targets, and drug–drug interactions encoded as 964 different polypharmacy side-effect edge types. This graph representation is used to develop Decagon. (credit: Marinka Zitnik et al./Bioinformatics)

Millions of people take five or more medications a day, but doctors have no idea what side effects might arise from adding another drug.*

Now, Stanford University computer scientists have developed a deep-learning system (a kind of AI modeled after the brain) called Decagon** that could help doctors make better decisions about which drugs to prescribe. It could also help researchers find better combinations of drugs to treat complex diseases.

The problem is that with so many drugs currently on the U.S. pharmaceutical market, “it’s practically impossible to test a new drug in combination with all other drugs, because just for one drug, that would be five thousand new experiments,” said Marinka Zitnik, a postdoctoral fellow in computer science and lead author of a paper presented July 10 at the 2018 meeting of the International Society for Computational Biology.

With some new drug combinations (“polypharmacy”), she said, “truly we don’t know what will happen.”

How proteins interact and how different drugs affect these proteins

So Zitnik and associates created a network describing how the more than 19,000 proteins in our bodies interact with each other and how different drugs affect these proteins. Using more than 4 million known associations between drugs and side effects, the team then designed a method to identify patterns in how side effects arise, based on how drugs target different proteins, and also to infer patterns about drug-interaction side effects.***

Based on that method, the system could predict the consequences of taking two drugs together.
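The graph the article describes can be pictured as a small toy in code. The node names, drugs, and side effect below are made up for illustration (the real graph spans more than 19,000 proteins and 964 side-effect edge types), but the structure is the same: typed edges connect proteins to proteins, drugs to their protein targets, and drugs to drugs, with one drug–drug edge type per side effect. Predicting a polypharmacy side effect then amounts to asking which drug pairs are missing an edge of a given type.

```python
# Toy sketch of the kind of multimodal graph Decagon operates on.
# All node names and edge lists here are invented for illustration.

from collections import defaultdict

edges = defaultdict(list)  # edge_type -> list of (node_a, node_b)

# Protein-protein interactions
edges["ppi"] += [("P1", "P2"), ("P2", "P3")]

# Drug-protein targets
edges["targets"] += [("drugA", "P1"), ("drugB", "P3")]

# Drug-drug edges, one edge type per polypharmacy side effect
edges["side_effect:muscle_inflammation"] += [("drugA", "drugB")]

def candidate_pairs(drugs, known):
    """Drug pairs not yet labeled with a given side effect --
    the pairs a link-prediction model would score."""
    known_set = {frozenset(p) for p in known}
    return [(a, b)
            for i, a in enumerate(drugs)
            for b in drugs[i + 1:]
            if frozenset((a, b)) not in known_set]

print(candidate_pairs(["drugA", "drugB", "drugC"],
                      edges["side_effect:muscle_inflammation"]))
# -> [('drugA', 'drugC'), ('drugB', 'drugC')]
```

Decagon itself scores such candidate edges with a graph convolutional network rather than enumerating them, but the framing of the task as typed link prediction over this graph is the same.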

To evaluate the system, the group looked to see if its predictions came true. In many cases, they did. For example, there was no indication in the original data that the combination of atorvastatin (marketed under the trade name Lipitor, among others), a cholesterol drug, and amlodipine (Norvasc), a blood-pressure medication, could lead to muscle inflammation. Yet Decagon predicted that it would, and it was right.

In the future, the team members hope to extend their results to interactions involving more than two drugs. They also hope to create a more user-friendly tool to give doctors guidance on whether it’s a good idea to prescribe a particular drug to a particular patient, and to help researchers develop drug regimens for complex diseases with fewer side effects.

Ref.: Bioinformatics (open access). Source: Stanford University.

* More than 23 percent of Americans took three or more prescription drugs in the past 30 days, according to a 2017 CDC estimate. Furthermore, 39 percent over age 65 take five or more, a number that’s increased three-fold in the last several decades. There are about 1,000 known side effects and 5,000 drugs on the market, making for nearly 125 billion possible side effects between all possible pairs of drugs. Most of these have never been prescribed together, let alone systematically studied, according to the Stanford researchers.

** In geometry, a decagon is a ten-sided polygon.

*** The research was supported by the National Science Foundation, the National Institutes of Health, the Defense Advanced Research Projects Agency, the Stanford Data Science Initiative, and the Chan Zuckerberg Biohub.


Spotting Image Manipulation with AI

Twenty-eight years ago, Adobe Photoshop brought the analog photograph into the digital world, reshaping the human relationship with the image. Today, people edit images to achieve new heights of artistic expression, to preserve our history, and even to find missing children. On the flip side, some people use these powerful tools to “doctor” photos for deceptive purposes. Like any technology, it’s an extension of human intent, and can be used for both the best and the worst of our imaginations.

In 1710 Jonathan Swift wrote, “Falsehood flies, and the truth comes limping after it.” Even today, as a society, we’ve struggled to understand the way perception and belief are shaped between authenticity, truth, falsehood and media. Add newer social media technologies to the mix, and those falsehoods fly faster than ever.

That’s why, in addition to creating new capabilities and features for the creation of digital media, Adobe is exploring the boundaries of what’s possible using new technologies, such as artificial intelligence, to increase trust and authenticity in digital media.

AI: a new solution for an old problem

Vlad Morariu, senior research scientist at Adobe, has been working on technologies related to computer vision for many years. In 2016, he started applying his talents to the challenge of detecting image manipulation as part of the DARPA Media Forensics program.

Vlad explains that a variety of tools already exist to help document and trace the digital manipulation of photos. “File formats contain metadata that can be used to store information about how the image was captured and manipulated. Forensic tools can be used to detect manipulation by examining the noise distribution, strong edges, lighting and other pixel values of a photo. Watermarks can be used to establish original creation of an image.”

Of course, none of these tools provides a complete picture of a photo’s authenticity, nor are they practical for every situation. Some are easily defeated; some require deep expertise, and others lengthy execution and analysis to use properly.

Vlad suspected that technologies such as artificial intelligence and machine learning could be used to detect more easily, reliably and quickly whether any part of a digital image had been manipulated, and if so, which aspects were modified.

Building on research he started fourteen years ago and continued as a Ph.D. student in computer science at the University of Maryland, Vlad describes some of these new techniques in a recent paper — Learning Rich Features for Image Manipulation Detection.

“We focused on three common tampering techniques—splicing, where parts of two different images are combined; copy-move, where objects in a photograph are moved or cloned from one place to another; and removal, where an object is removed from a photograph, and filled-in,” he notes.

Every time an image is manipulated, it leaves behind clues that can be studied to understand how it was altered. “Each of these techniques tends to leave certain artifacts, such as strong contrast edges, deliberately smoothed areas, or different noise patterns,” he says. Although these artifacts are not usually visible to the human eye, they are much more easily detectable through close analysis at the pixel level, or by applying filters that help highlight these changes.

Now, what used to take a forensic expert hours can be done in seconds. The project shows that AI can successfully identify which images have been manipulated, determine the type of manipulation used, and highlight the specific area of the photograph that was altered.

“Using tens of thousands of examples of known, manipulated images, we successfully trained a deep learning neural network to recognize image manipulation, fusing two distinct techniques together in one network to benefit from their complementary detection capabilities,” Vlad explains.

The first technique uses an RGB stream (changes to the red, green and blue color values of pixels) to detect tampering. The second uses a noise stream filter. Image noise is random variation of color and brightness in an image, produced by the sensor of a digital camera or as a byproduct of software manipulation; it looks a little like static. Many photographs and cameras have unique noise patterns, so it is possible to detect noise inconsistencies between authentic and tampered regions, especially if imagery has been combined from two or more photos.
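The noise-stream intuition can be shown with a minimal sketch: a high-pass filter suppresses scene content and leaves mostly sensor noise, so a pasted-in region with a different noise fingerprint stands out statistically. The kernel and the synthetic image below are illustrative assumptions, not the SRM filters or training data used in the Adobe paper.

```python
# Minimal sketch of noise-residual analysis for splice detection.
# The 3x3 high-pass kernel is illustrative, not the paper's actual filters.

import numpy as np

def noise_residual(gray):
    """High-pass residual: what remains after removing smooth scene content."""
    k = np.array([[-1,  2, -1],
                  [ 2, -4,  2],
                  [-1,  2, -1]], dtype=float) / 4.0
    h, w = gray.shape
    out = np.zeros_like(gray, dtype=float)
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            out[y, x] = np.sum(gray[y - 1:y + 2, x - 1:x + 2] * k)
    return out

rng = np.random.default_rng(0)
img = np.full((32, 32), 128.0) + rng.normal(0, 2.0, (32, 32))  # low-noise "camera"
img[8:16, 8:16] += rng.normal(0, 12.0, (8, 8))                 # noisier pasted patch

res = noise_residual(img)
inside = res[9:15, 9:15].std()      # residual noise inside the pasted patch
outside = res[20:28, 20:28].std()   # residual noise in the untouched area
print(inside > 2 * outside)         # the tampered region's noise level stands out
```

A real detector localizes the inconsistent region with a learned network rather than a fixed threshold, but the signal it feeds on is this kind of residual mismatch.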

An example of authentic images, manipulated images, the RGB and noise streams used to detect manipulation, and the results of AI analysis. Source: the NC2016 dataset

While these techniques are still being perfected, and do not necessarily solve the problem of “absolute truth” of a photo, they provide more possibility and more options for managing the impact of digital manipulation, and they potentially answer questions of authenticity more effectively.

Vlad notes that future work might explore ways to extend the algorithm to include other artifacts of manipulation, such as differences in illumination throughout a photograph or compression introduced by repeated saving of digital files.

The human factor

Technology alone is not enough to solve an age-old challenge that increasingly confronts us in today’s news environment: What media, if any, can we treat as authentic versions of the truth?

Jon Brandt, senior principal scientist and director for Adobe Research, says that answering that question often comes down to trust and reputation rather than technology. “The Associated Press and other news organizations publish guidelines for the appropriate digital editing of photographs for news media,” he explains.

In other words, when you see a photo on a news site or newspaper, at some level you must trust the chain of custody for that photo, and rely on the ethics of the publisher to refrain from improper manipulation of the image.

The same will be true of newer techniques that are democratizing the ability to manipulate voice and video, he adds. “I think one of the important roles Adobe can play is to develop technology that helps them monitor and verify authenticity as part of their process.

“It’s important to develop technology responsibly, but ultimately these technologies are created in service to society.  Consequently, we all share the responsibility to address potential negative impacts of new technologies through changes to our social institutions and conventions.”

Read more about artificial intelligence in our Human & Machine collection.


Microsoft researchers build a bot that draws what you tell it to

If you’re handed a note that asks you to draw a picture of a bird with a yellow body, black wings and a short beak, chances are you’ll start with a rough outline of a bird, then glance back at the note, see the yellow part and reach for a yellow pen to fill in the body, read the note again and reach for a black pen to draw the wings and, after a final check, shorten the beak and define it with a reflective glint. Then, for good measure, you might sketch a tree branch where the bird rests.

Now, there’s a bot that can do that, too.

The new artificial intelligence technology under development in Microsoft’s research labs is programmed to pay close attention to individual words when generating images from caption-like text descriptions. This deliberate focus produced a nearly three-fold boost in image quality compared to the previous state-of-the-art technique for text-to-image generation, according to results on an industry-standard test reported in a publicly posted research paper.

The technology, which the researchers simply call the drawing bot, can generate images of everything from ordinary pastoral scenes, such as grazing livestock, to the absurd, such as a floating double-decker bus. Each image contains details that are absent from the text descriptions, indicating that this artificial intelligence contains an artificial imagination.
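The word-level attention the article describes can be illustrated in miniature: while drawing a given region, the model computes a softmax-weighted attention over the caption's words, so "yellow" dominates when the body is being drawn and "black" when the wings are. The 3-d word vectors below are made up for illustration; the real model uses learned embeddings and image features.

```python
# Toy word-attention step for text-to-image generation.
# Embeddings are invented 3-d vectors, not the model's real features.

import numpy as np

def attend(region_query, word_vecs):
    scores = word_vecs @ region_query          # similarity of each word to the region
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                   # softmax over caption words
    return weights @ word_vecs, weights        # attended context + weights

words = ["bird", "yellow", "body", "black", "wings"]
word_vecs = np.array([[1.0, 0.1, 0.1],
                      [0.1, 1.0, 0.0],   # "yellow"
                      [0.2, 0.8, 0.1],
                      [0.0, 0.0, 1.0],   # "black"
                      [0.1, 0.1, 0.9]])

body_query = np.array([0.0, 2.0, 0.0])   # region currently drawing the body
context, w = attend(body_query, word_vecs)
print(words[int(np.argmax(w))])          # -> yellow
```

In the full system this attended context conditions the generator at each stage, which is what lets individual words steer individual parts of the picture.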


The Human Brain Can Create Structures in Up to 11 Dimensions

Neuroscientists have used a classic branch of maths in a totally new way to peer into the structure of our brains. What they’ve discovered is that the brain is full of multi-dimensional geometrical structures operating in as many as 11 dimensions.

We’re used to thinking of the world from a 3-D perspective, so this may sound a bit tricky, but the results of this new study could be the next major step in understanding the fabric of the human brain – the most complex structure we know of.
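The "dimensions" here are topological, not spatial: in the Blue Brain analysis, a group of n all-to-all connected neurons (a clique) is treated as an (n-1)-dimensional simplex, so a clique of eight neurons is a 7-dimensional structure. A small sketch on an undirected toy graph shows the counting idea (the study actually works with directed cliques; the graph below is invented for illustration).

```python
# Counting cliques in a toy graph and reading each n-clique as an
# (n-1)-dimensional simplex, as in the Blue Brain topological analysis.

from itertools import combinations

edges = {frozenset(e) for e in
         [(1, 2), (1, 3), (2, 3), (1, 4), (2, 4), (3, 4), (4, 5)]}
nodes = {n for e in edges for n in e}

def is_clique(group):
    """A clique: every pair in the group is connected."""
    return all(frozenset(p) in edges for p in combinations(group, 2))

def simplex_dimensions(nodes, edges):
    dims = {}
    for size in range(2, len(nodes) + 1):
        count = sum(is_clique(g) for g in combinations(sorted(nodes), size))
        if count:
            dims[size - 1] = count  # dimension = clique size - 1
    return dims

print(simplex_dimensions(nodes, edges))
# -> {1: 7, 2: 4, 3: 1}: seven edges, four triangles, one 3-d tetrahedron
```

Finding structures "in up to 11 dimensions" thus means finding directed cliques of up to a dozen neurons, not neurons living outside ordinary 3-d space.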

This latest brain model was produced by a team of researchers from the Blue Brain Project, a Swiss research initiative devoted to building a supercomputer-powered reconstruction of the human brain.