Emergent dynamics of neuromorphic nanowire networks

The human brain is a product of evolution, tuned and reshaped by an ever-changing environment. Its neuronal system can recognize, conceptualize and memorize objects in the physical world. Using environmental information, we establish logical associations that ultimately allow us not only to survive but also to solve highly complex problems1. However, in an increasingly connected and interactive world, the volume of information to process has grown exponentially, and to extract and synthesize meaningful information, computerized approaches such as machine learning and its various incarnations have gained tremendous popularity2.

Typically, Artificial Neural Networks (ANNs) attain this goal through a very delicate and case-selective combination of learning strategies3. Data containing complex or contextual associations between objects normally require heuristic sampling, which limits their ability to synthesize information. Conventional CMOS architectures also restrain the amount of data that can be efficiently processed with ANNs, owing to power-consumption bottlenecks.
Interest in the creation of synthetic neurons that could increase the processing abilities of ANNs has grown considerably with the discovery of nanomaterials with memristive properties4. A memristive device is a non-linear two-terminal device in which the resistance shows resilience to change (i.e. memory), manifested as hysteretic behavior when the energy change is reversed or reduced, also termed resistive switching. The memristor thus has two important neurosynapse-like properties: plasticity and retention. Traditional integrate-and-fire models, which emulate the electrical behavior of neurons using passive circuit elements, can be simulated exclusively with these elements5,6,7. Memristive devices have been successfully embedded into various CMOS architectures, enabling the realization of synthetic neural networks (SNNs). SNNs imitate the topology of an ANN in a physical layout, typically stacking memristive terminals in cross-bar configurations8,9. By using voltage pulses to configure the internal state, or weight, of individual memristors, memorization, learning and classification abilities have been achieved10,11,12,13. However promising, this approach remains reliant upon CMOS technology and inherits some of its limitations: a large cost-efficiency ratio, high power consumption, and subpar performance with respect to computerized ANNs …
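The pinched hysteresis loop that defines a memristor can be seen in a few lines of simulation. The sketch below uses the classic HP linear-drift model, not the nanowire-network device from the article; all device parameters (R_ON, R_OFF, thickness, mobility) are assumed illustrative values:

```python
import math

# Illustrative sketch of the HP-style linear-drift memristor model:
# resistance depends on an internal state w that drifts with the
# charge that has flowed through the device.
R_ON, R_OFF = 100.0, 16_000.0   # ohm, bounding resistances (assumed)
D = 10e-9                        # m, device thickness (assumed)
MU = 1e-14                       # m^2/(V*s), dopant mobility (assumed)

def simulate(v_amp=1.0, freq=1.0, steps=20000, w0=0.1):
    """Drive the device with one sine-wave period; return (V, I) samples."""
    dt = 1.0 / (freq * steps)
    w = w0 * D                   # internal state, bounded by 0 <= w <= D
    vs, cur = [], []
    for k in range(steps):
        v = v_amp * math.sin(2 * math.pi * freq * k * dt)
        m = R_ON * (w / D) + R_OFF * (1 - w / D)   # state-dependent resistance
        i = v / m
        w += MU * (R_ON / D) * i * dt              # linear dopant drift
        w = min(max(w, 0.0), D)                    # hard bounds
        vs.append(v)
        cur.append(i)
    return vs, cur

vs, cur = simulate()
# At the same voltage, the up-sweep and down-sweep currents differ:
# that pinched I-V hysteresis loop is the device's "memory".
```

Plotting `cur` against `vs` traces the figure-eight loop; the loop area shrinks at higher frequency, another memristor signature.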

Figure 1

Morphological and structural properties of PVP-coated Ag nanowires and the nanowire network. (a) Optical micrograph of the nanowire network layout after drop-cast deposition on a SiO2 substrate. (b) SEM image of nanowire interconnectivity in a selected area of the network. (c) HR-TEM image showing the atomic planes of the [100] facet of a Ag nanowire, with the nanometric PVP layer embedded on its lateral surface. (d,e) Sketches detailing the insulating junctions formed by the polymeric PVP layer between the Ag surfaces of overlapping nanowires. (f) Scheme of the measurement system: two tungsten probes, separated by a distance d = 500 μm, act as electrodes contacting the nanowire network deposited on SiO2. The scale bars in (a–c) are 100 μm, 10 μm and 2 nm, respectively.

Read full post: https://www.nature.com/articles/s41598-019-51330-6

Preana: Game Theory Based Prediction with Reinforcement Learning.

In this article, we have developed a game-theory-based prediction tool, named Preana, built on a promising model developed by Professor Bruce Bueno de Mesquita. The first part of this work is dedicated to exploring the specifics of Bueno de Mesquita's algorithm and reproducing the factors and features that have not been revealed in the literature. In addition, we have developed a learning mechanism to model the players' reasoning ability when it comes to taking risks. Preana can predict the outcome of any issue with multiple stakeholders who have conflicting interests in economics, business, and political science. We have utilized game theory, expected utility theory, median voter theory, probability distributions and reinforcement learning. We were able to reproduce Bueno de Mesquita's reported results, and we include two case studies from his publications, comparing his results to those of Preana. We have also applied Preana to Iran's 2013 presidential election to verify the accuracy of its prediction.
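One ingredient the abstract names, median voter theory, is easy to make concrete. The sketch below shows a weighted-median step of the kind used in Bueno de Mesquita-style expected-utility models; it is not Preana's actual algorithm, and the stakeholders, the 0–100 policy scale, and the choice of capability × salience as the weight are all illustrative assumptions:

```python
# Hypothetical sketch of a weighted-median outcome prediction:
# each stakeholder has a policy position, a capability, and a salience,
# and the predicted outcome gravitates toward the weighted median position.
def weighted_median(players):
    """players: list of (position, capability, salience) tuples."""
    # Effective power = capability * salience (a common modeling choice).
    weighted = sorted((pos, cap * sal) for pos, cap, sal in players)
    total = sum(w for _, w in weighted)
    acc = 0.0
    for pos, w in weighted:
        acc += w
        if acc >= total / 2:     # first position covering half the power
            return pos

# Made-up stakeholders on a 0-100 policy scale.
players = [(20, 0.9, 0.8), (40, 0.5, 0.9), (70, 0.8, 0.4), (90, 0.3, 1.0)]
print(weighted_median(players))  # prints 40: the weighted-median position
```

A full model would then iterate rounds of challenges and position shifts; the median step above is only the anchor of each round.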


Blue Brain team discovers a multi-dimensional universe in brain networks

For most people, it is a stretch of the imagination to understand the world in four dimensions but a new study has discovered structures in the brain with up to eleven dimensions – ground-breaking work that is beginning to reveal the brain’s deepest architectural secrets.

Using algebraic topology in a way that it has never been used before in neuroscience, a team from the Blue Brain Project has uncovered a universe of multi-dimensional geometrical structures and spaces within the networks of the brain.

The research, published today in Frontiers in Computational Neuroscience, shows that these structures arise when a group of neurons forms a clique: each neuron connects to every other neuron in the group in a very specific way that generates a precise geometric object. The more neurons there are in a clique, the higher the dimension of the geometric object.
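The clique-to-dimension correspondence described above can be illustrated directly: a clique of n fully connected neurons corresponds to an (n−1)-dimensional simplex. The brute-force sketch below uses a tiny made-up graph, not Blue Brain data:

```python
from itertools import combinations

# Toy graph (hypothetical): nodes 0-3 form a complete clique, node 4
# hangs off node 3. A clique of n nodes is an (n-1)-dimensional simplex.
edges = {(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3), (3, 4)}

def connected(a, b):
    return (a, b) in edges or (b, a) in edges

def cliques(nodes, size):
    """All fully connected groups of `size` nodes (brute force)."""
    return [c for c in combinations(nodes, size)
            if all(connected(a, b) for a, b in combinations(c, 2))]

for k in range(2, 6):
    for c in cliques(range(5), k):
        print(f"{k}-clique {c}: a {k - 1}-dimensional simplex")
```

Here the largest clique, (0, 1, 2, 3), is a 3-dimensional simplex (a tetrahedron); the eleven-dimensional structures in the study correspond to cliques of twelve all-to-all connected neurons. Real analyses use optimized clique enumeration rather than this exhaustive scan.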

“We found a world that we had never imagined,” says neuroscientist Henry Markram, director of Blue Brain Project and professor at the EPFL in Lausanne, Switzerland, “there are tens of millions of these objects even in a small speck of the brain, up through seven dimensions. In some networks, we even found structures with up to eleven dimensions.”

Markram suggests this may explain why it has been so hard to understand the brain. “The mathematics usually applied to study networks cannot detect the high-dimensional structures and spaces that we now see clearly.”

If 4D worlds stretch our imagination, worlds with 5, 6 or more dimensions are too complex for most of us to comprehend. This is where algebraic topology comes in: a branch of mathematics that can describe systems with any number of dimensions. The mathematicians who brought algebraic topology to the study of brain networks in the Blue Brain Project were Kathryn Hess from EPFL and Ran Levi from Aberdeen University.

“Algebraic topology is like a telescope and microscope at the same time. It can zoom into networks to find hidden structures – the trees in the forest – and see the empty spaces – the clearings – all at the same time,” explains Hess.

In 2015, Blue Brain published the first digital copy of a piece of the neocortex – the most evolved part of the brain and the seat of our sensations, actions, and consciousness. In this latest research, using algebraic topology, multiple tests were performed on the virtual brain tissue to show that the multi-dimensional brain structures discovered could never be produced by chance. Experiments were then performed on real brain tissue in the Blue Brain's wet lab in Lausanne, confirming that the earlier discoveries in the virtual tissue are biologically relevant and also suggesting that the brain constantly rewires during development to build a network with as many high-dimensional structures as possible.

When the researchers presented the virtual brain tissue with a stimulus, cliques of progressively higher dimensions assembled momentarily to enclose high-dimensional holes, which the researchers refer to as cavities. “The appearance of high-dimensional cavities when the brain is processing information means that the neurons in the network react to stimuli in an extremely organized manner,” says Levi. “It is as if the brain reacts to a stimulus by building then razing a tower of multi-dimensional blocks, starting with rods (1D), then planks (2D), then cubes (3D), and then more complex geometries with 4D, 5D, etc. The progression of activity through the brain resembles a multi-dimensional sandcastle that materializes out of the sand and then disintegrates.”

The big question these researchers are asking now is whether the intricacy of tasks we can perform depends on the complexity of the multi-dimensional “sandcastles” the brain can build. Neuroscience has also been struggling to find where the brain stores its memories. “They may be ‘hiding’ in high-dimensional cavities,” Markram speculates.


How to predict the side effects of millions of drug combinations.

Doctors have no idea, but Stanford University computer scientists have figured it out, using artificial intelligence

July 11, 2018

An example graph of polypharmacy side effects derived from genomic and patient population data, protein–protein interactions, drug–protein targets, and drug–drug interactions encoded by 964 different polypharmacy side effects. The graph representation is used to develop Decagon. (credit: Marinka Zitnik et al./Bioinformatics)

Millions of people take up to five or more medications a day, but doctors have no idea what side effects might arise from adding another drug.*

Now, Stanford University computer scientists have developed a deep-learning system (a kind of AI modeled after the brain) called Decagon** that could help doctors make better decisions about which drugs to prescribe. It could also help researchers find better combinations of drugs to treat complex diseases.

The problem is that with so many drugs currently on the U.S. pharmaceutical market, “it’s practically impossible to test a new drug in combination with all other drugs, because just for one drug, that would be five thousand new experiments,” said Marinka Zitnik, a postdoctoral fellow in computer science and lead author of a paper presented July 10 at the 2018 meeting of the International Society for Computational Biology.

With some new drug combinations (“polypharmacy”), she said, “truly we don’t know what will happen.”

How proteins interact and how different drugs affect these proteins

So Zitnik and associates created a network describing how the more than 19,000 proteins in our bodies interact with each other and how different drugs affect these proteins. Using more than 4 million known associations between drugs and side effects, the team then designed a method to identify patterns in how side effects arise, based on how drugs target different proteins, and also to infer patterns about drug-interaction side effects.***
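The kind of multi-modal graph described above, protein–protein interactions plus drug–protein targets, can be sketched in miniature. The real Decagon is a graph convolutional network trained on millions of associations; the toy below merely flags drug pairs whose protein targets interact, a crude proxy for potential interference, with all drugs, proteins, and interactions made up:

```python
from itertools import combinations

# Hypothetical miniature of the Decagon-style graph: drugs bind proteins,
# and proteins interact with each other.
drug_targets = {                      # drug -> proteins it binds (made up)
    "drugA": {"P1", "P2"},
    "drugB": {"P3"},
    "drugC": {"P4"},
}
ppi = {frozenset({"P2", "P3"})}       # protein-protein interactions (made up)

def may_interact(d1, d2):
    """True if any target of d1 interacts with (or equals) a target of d2."""
    for p in drug_targets[d1]:
        for q in drug_targets[d2]:
            if p == q or frozenset({p, q}) in ppi:
                return True
    return False

flagged = [pair for pair in combinations(sorted(drug_targets), 2)
           if may_interact(*pair)]
print(flagged)  # prints [('drugA', 'drugB')]: the pair worth closer scrutiny
```

The actual system goes much further, learning a separate predictor per side-effect type from patterns across the whole graph rather than a single hand-written rule like this.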

Based on that method, the system could predict the consequences of taking two drugs together.

To evaluate the system, the group looked to see if its predictions came true. In many cases, they did. For example, there was no indication in the original data that the combination of atorvastatin (marketed under the trade name Lipitor, among others), a cholesterol drug, and amlodipine (Norvasc), a blood-pressure medication, could lead to muscle inflammation. Yet Decagon predicted that it would, and it was right.

In the future, the team members hope to extend their results to interactions among more than two drugs. They also hope to create a more user-friendly tool to give doctors guidance on whether it’s a good idea to prescribe a particular drug to a particular patient, and to help researchers develop drug regimens for complex diseases with fewer side effects.

Ref.: Bioinformatics (open access). Source: Stanford University.

* More than 23 percent of Americans took three or more prescription drugs in the past 30 days, according to a 2017 CDC estimate. Furthermore, 39 percent over age 65 take five or more, a number that’s increased three-fold in the last several decades. There are about 1,000 known side effects and 5,000 drugs on the market, making for nearly 125 billion possible side effects between all possible pairs of drugs. Most of these have never been prescribed together, let alone systematically studied, according to the Stanford researchers.

** In geometry, a decagon is a ten-sided polygon.

*** The research was supported by the National Science Foundation, the National Institutes of Health, the Defense Advanced Research Projects Agency, the Stanford Data Science Initiative, and the Chan Zuckerberg Biohub.

Source: KurzweilAi.net, Stanford.edu