Deep neural network chip from Intel®

Prototype and deploy deep neural network (DNN) applications smarter and more efficiently with a tiny, fanless, deep learning development kit designed to enable a new generation of intelligent devices.

The new, improved Intel® Neural Compute Stick 2 (Intel® NCS 2) features Intel’s latest high-performance vision processing unit: the Intel® Movidius™ Myriad™ X VPU. With more compute cores and a dedicated hardware accelerator for deep neural network inference, the Intel® NCS 2 delivers up to eight times the performance boost compared to the previous generation Intel® Movidius™ Neural Compute Stick (NCS).

Technical Specifications

  • Processor: Intel® Movidius™ Myriad™ X Vision Processing Unit (VPU)
  • Supported frameworks: TensorFlow* and Caffe*
  • Connectivity: USB 3.0 Type-A
  • Dimensions: 2.85 in. x 1.06 in. x 0.55 in. (72.5 mm x 27 mm x 14 mm)
  • Operating temperature: 0°C to 40°C
  • Compatible operating systems: Ubuntu* 16.04.3 LTS (64 bit), CentOS* 7.4 (64 bit), and Windows® 10 (64 bit)

source: https://software.intel.com/en-us/neural-compute-stick
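
For a sense of what deploying a model to the stick looks like in practice, here is a minimal sketch using OpenVINO's Python Inference Engine API, which targets the Myriad X VPU through the "MYRIAD" device name. Module and method names vary between OpenVINO releases, and the IR files below are hypothetical placeholders produced by the Model Optimizer:

    import numpy as np
    from openvino.inference_engine import IECore

    ie = IECore()
    net = ie.read_network(model="model.xml", weights="model.bin")   # hypothetical IR files
    exec_net = ie.load_network(network=net, device_name="MYRIAD")   # MYRIAD = Myriad X VPU

    input_blob = next(iter(net.input_info))
    output_blob = next(iter(net.outputs))

    # Dummy input with the network's expected shape, e.g. 1x3x224x224.
    shape = net.input_info[input_blob].input_data.shape
    result = exec_net.infer(inputs={input_blob: np.zeros(shape, dtype=np.float32)})
    print(result[output_blob].shape)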

Google AI platform like a Raspberry Pi

Google has promised us new hardware products for machine learning at the edge, and now it’s finally out. The thing you’re going to take away from this is that Google built a Raspberry Pi with machine learning. This is Google’s Coral dev board, built around the Edge TPU, a custom-made ASIC designed to run machine learning algorithms ‘at the edge’.

This new hardware was launched ahead of the TensorFlow Dev Summit, an event revolving around machine learning and ‘AI’ in embedded applications, specifically power- and computationally-limited environments. This is ‘the edge’ in marketing speak, and already we’ve seen a few products designed from the ground up to run ML algorithms and inference in embedded applications. There are RISC-V microcontrollers with machine learning accelerators available now, and Nvidia has been working on this for years. Now Google is throwing its hat into the ring with a custom-designed ASIC that accelerates TensorFlow. It just so happens that the board looks like a Raspberry Pi.

WHAT’S ON THE BOARD

On board the Coral dev board is an NXP i.MX 8M SoC with a quad-core Cortex-A53 and a Cortex-M4F. The GPU is listed as ‘Integrated GC7000 Lite Graphics’. RAM is 1 GB of LPDDR4, flash is provided by 8 GB of eMMC, and WiFi and Bluetooth 4.1 are included. Connectivity is provided through USB, with Type-C OTG, a Type-C power connection, a Type-A 3.0 host, and a micro-B serial console. There is also Gigabit Ethernet, a 3.5 mm audio jack, a microphone, full-size HDMI, 4-lane MIPI-DSI, and 4-lane MIPI-CSI2 camera support. The GPIO pins are exactly — and I mean exactly — like the Raspberry Pi GPIO pins. They provide the same signals in the same places, although due to the different SoCs, you will need to change a line or two of code defining the pin numbers.
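
Purely as an illustration of that last point (not taken from the article): the Coral's Mendel Linux image exposes the header pins through standard Linux gpiochip devices, so a library such as python-periphery can drive them. The gpiochip path and line number below are hypothetical placeholders; the physical header pin stays where a Raspberry Pi user expects it, but the SoC-level line it maps to is what changes.

    from periphery import GPIO

    led = GPIO("/dev/gpiochip2", 13, "out")   # hypothetical chip/line for one header pin
    led.write(True)                           # drive the pin high
    led.close()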

You might be asking why Google would build a Raspberry Pi clone. That answer comes in the form of the machine learning accelerator chip mounted on the board. Machine learning and AI chips were popular in the ’80s, and everything old is new again, I guess. The Google Edge TPU coprocessor supports TensorFlow Lite, or ‘machine learning at the edge’. The point of TensorFlow Lite isn’t to train a system, but to run an existing model. It’ll do facial recognition.
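
As a rough idea of what "running an existing model" on the Edge TPU involves, here is a minimal sketch using the tflite_runtime interpreter with the Edge TPU delegate; the model file name is a placeholder, and the model must already have been compiled for the Edge TPU:

    import numpy as np
    import tflite_runtime.interpreter as tflite

    interpreter = tflite.Interpreter(
        model_path="mobilenet_v2_edgetpu.tflite",     # hypothetical pre-compiled model
        experimental_delegates=[tflite.load_delegate("libedgetpu.so.1")])
    interpreter.allocate_tensors()

    input_details = interpreter.get_input_details()
    output_details = interpreter.get_output_details()

    # Dummy uint8 input with whatever shape the model expects, e.g. 1x224x224x3.
    dummy = np.zeros(input_details[0]["shape"], dtype=np.uint8)
    interpreter.set_tensor(input_details[0]["index"], dummy)
    interpreter.invoke()                              # inference runs on the Edge TPU
    scores = interpreter.get_tensor(output_details[0]["index"])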

The Coral dev board is available for $149.00, and you can order it on Mouser. As of this writing, there are 1320 units on order at Mouser, with a delivery date of March 6th (search for Mouser part number 212-193575000077).

source: https://hackaday.com/2019/03/05/google-launches-ai-platform-that-looks-remarkably-like-a-raspberry-pi/ 

SpiNNaker, the Million-Core Supercomputer, Finally Switched On

Twelve years in the making, the “brain computer” designed at the University of Manchester has finally been switched on. What does this computer do? How is it made? And who is Steve Furber?

AI systems have developed rapidly in the past decade, using deep learning, neural networks, and large computers to try to simulate neurons. But AI is not the only area of interest for such techniques; scientists and engineers alike are also keen to simulate the human brain to better understand how it works and why.

Simulating the brain is no trivial task. The complexity of the human brain is difficult to replicate, which is part of why the SpiNNaker computer is important.

The Challenges of Simulating a Brain

One of the first fundamental differences between the brain and computers is how their “smallest units” function. Brain neurons can have multiple connections and react to impulses in a range of different ways. Computer transistors, by comparison, are switches that, while they can be connected to other transistors, can only be in one of two states.

Neurons are also able to forge links between other neurons and react to stimuli differently (which is one definition of “learning”), whereas transistor connections are fixed.

Because of these differences, scientists have to “simulate” neurons and connections in software rather than in hardware, which severely impacts the number of neurons and links that can be simulated simultaneously.
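
To make that concrete, here is a minimal sketch (mine, not the article's) of what simulating a single neuron in software typically looks like: a leaky integrate-and-fire model stepped forward in small time increments, with the neuron's state held in ordinary variables rather than dedicated hardware.

    import numpy as np

    # Leaky integrate-and-fire neuron, stepped in 1 ms increments for 1 second.
    dt, tau = 1.0, 20.0                               # timestep (ms), membrane time constant (ms)
    v_rest, v_thresh, v_reset = -65.0, -50.0, -65.0   # membrane potentials (mV)

    v = v_rest
    input_current = np.random.uniform(0.0, 1.5, size=1000)   # arbitrary stimulus
    spikes = []

    for t, i_in in enumerate(input_current):
        # The potential decays toward rest while integrating the input current.
        v += (dt / tau) * ((v_rest - v) + 20.0 * i_in)
        if v >= v_thresh:        # crossing the threshold emits a spike ...
            spikes.append(t)
            v = v_reset          # ... and the potential resets

    print(f"{len(spikes)} spikes in 1 s of simulated time")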

What about simulating neurons in hardware?

Neurons and transistors share little in common, but a better comparison would be simple microcontrollers and FPGAs: microcontrollers are akin to neurons in that they can process outside signals quickly while being comparatively simple in architecture, and FPGAs provide the ability to break and create connections between those microcontrollers.

Could hardware simulation be the key? One team of researchers believes so and has spent the last 12 years on the idea.

The SpiNNaker

A research team at the University of Manchester has spent the last 12 years creating a computer that simulates neurons and their connections using many simple cores interconnected in a massively parallel system. That computer, called SpiNNaker, has finally been turned on.

The million-core computer is designed to simulate up to a billion neurons in real-time to allow scientists to study neural networks and pathways in a realistic manner by using hardware as opposed to software.

Unlike traditional methods for simulating neurons, SpiNNaker has individual processors that each simulate up to 1000 neurons, transmitting and receiving small packets of data to and from many other neurons simultaneously.
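
In practice, SpiNNaker machines are commonly driven through the PyNN spiking-network description API; the sketch below shows roughly what a tiny network description looks like with the sPyNNaker backend. It assumes the SpiNNaker software stack is installed and a board is reachable, and exact parameter names can differ between releases:

    import pyNN.spiNNaker as sim

    sim.setup(timestep=1.0)   # 1 ms steps, matching the machine's real-time operation

    # A Poisson spike source driving 100 leaky integrate-and-fire neurons.
    stimulus = sim.Population(100, sim.SpikeSourcePoisson(rate=10.0), label="stimulus")
    neurons = sim.Population(100, sim.IF_curr_exp(), label="neurons")

    sim.Projection(stimulus, neurons, sim.OneToOneConnector(),
                   synapse_type=sim.StaticSynapse(weight=0.5, delay=1.0))

    neurons.record(["spikes"])
    sim.run(1000.0)                        # simulate one second
    spikes = neurons.get_data("spikes")    # spike trains routed back from the board
    sim.end()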

Hexagonal topology between processors and a 48-processor SpiNNaker computer - Image courtesy University of Manchester

The Spiking Neural Network Architecture (SpiNNaker) system consists of ten 19-inch computer racks, with each rack containing 100,000 ARM cores. This core density is achieved with the use of a custom IC that contains up to 18 cores. Each board in a rack has 48 of these chips, which results in each board containing 864 processors.

Unlike typical software systems, the cores are arranged in a hexagonal pattern, with data transmission handled entirely in hardware, and it is this topology that allows the system to simulate one billion neurons in real time. The system uses ARM9 processors spread across 57K nodes containing a total of 7 TB of RAM; each processor node has 128 MB of off-die SDRAM, and each core has 32 KB of ROM and 64 KB of data tightly-coupled memory (DTCM) …
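
Taking the article's figures at face value, the headline numbers are mutually consistent; a quick back-of-the-envelope check using only the values quoted above:

    cores_per_chip, chips_per_board = 18, 48
    print(cores_per_chip * chips_per_board)         # 864 processors per board, as stated

    racks, cores_per_rack = 10, 100_000
    total_cores = racks * cores_per_rack            # about one million cores
    print(total_cores * 1_000)                      # ~1 billion neurons at 1000 per core

    nodes, sdram_per_node_mb = 57_000, 128          # "57K nodes", 128 MB SDRAM each
    print(nodes * sdram_per_node_mb / 1_000_000)    # ~7.3 TB, matching the ~7 TB figure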

https://www.allaboutcircuits.com/news/simulate-human-brain-spinnaker-million-core-computer-switched-on/

https://www.research.manchester.ac.uk/portal/files/60826558/FULL_TEXT.PDF

3D-printed Deep Learning neural network uses light instead of electrons

It’s a novel idea, using light diffracted through numerous plates instead of electrons. And to some, it might seem a little like replacing a computer with an abacus, but researchers at UCLA have high hopes for their quirky, shiny, speed-of-light artificial neural network.

The term was coined by Rina Dechter in 1986, and deep learning is now one of the fastest-growing methodologies in the machine learning community. It is often used in face, speech and audio recognition, language processing, social network filtering and medical image analysis, as well as for more specific tasks such as solving inverse imaging problems.

Traditionally, deep learning systems are implemented on a computer to learn data representation and abstraction and to perform tasks on par with – or better than – the performance of humans. However, the team led by Dr. Aydogan Ozcan, Chancellor’s Professor of Electrical and Computer Engineering at UCLA, didn’t use a traditional computer setup, instead choosing to forgo all those energy-hungry electrons in favor of light waves. The result was its all-optical Diffractive Deep Neural Network (D2NN) architecture.

The 3D-printed diffraction plates of the all-optical Diffractive Deep Neural Network (D2NN)

The setup uses 3D-printed translucent sheets, each with thousands of raised pixels, which deflect light through each panel in order to perform set tasks. By the way, these tasks are performed without the use of any power, except for the input light beam.
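
To make the idea concrete, here is a rough numerical sketch (my own, not the UCLA team's code) of how a stack of diffractive layers can be modeled: each printed plate applies a phase shift to the incoming optical field, and the field then propagates through free space to the next plate, which can be computed with an FFT-based angular spectrum method. All parameters below are toy values, not the published design:

    import numpy as np

    def propagate(field, wavelength, pixel_size, distance):
        """Free-space propagation of a complex field (angular spectrum method)."""
        n = field.shape[0]
        fx = np.fft.fftfreq(n, d=pixel_size)
        fxx, fyy = np.meshgrid(fx, fx)
        arg = np.maximum(1.0 / wavelength**2 - fxx**2 - fyy**2, 0.0)  # drop evanescent waves
        transfer = np.exp(2j * np.pi * distance * np.sqrt(arg))
        return np.fft.ifft2(np.fft.fft2(field) * transfer)

    # Toy setup: 64x64 pixels per plate, 0.75 mm wavelength, 0.4 mm pixels,
    # 30 mm spacing between plates, five plates in the stack.
    n, wavelength, pixel, spacing, layers = 64, 0.75e-3, 0.4e-3, 30e-3, 5

    rng = np.random.default_rng(0)
    phase_masks = rng.uniform(0, 2 * np.pi, size=(layers, n, n))  # would be learned offline

    field = np.ones((n, n), dtype=complex)           # plane-wave input standing in for an image
    for mask in phase_masks:
        field = field * np.exp(1j * mask)            # passive phase modulation by one plate
        field = propagate(field, wavelength, pixel, spacing)

    intensity = np.abs(field) ** 2                   # a detector reads intensity at the output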

The UCLA team’s all-optical deep neural network – which looks like the guts of a solid gold car battery – literally operates at the speed of light, and will find applications in image analysis, feature detection and object classification. Researchers on the team also envisage possibilities for D2NN architectures performing specialized tasks in cameras. Perhaps your next DSLR might identify your subjects on the fly and post the tagged image to your Facebook timeline.

The D2NN was trained to recognize handwritten numerals


“Using passive components that are fabricated layer by layer, and connecting these layers to each other via light diffraction created a unique all-optical platform to perform machine learning tasks at the speed of light,” said Dr. Ozcan.

For now, though, this is a proof of concept, but it shines a light on some unique opportunities for the machine learning industry.

The research has been published in the journal Science.

[Sources]
https://newatlas.com/diffractive-deep-neural-network-uses-light-to-learn/55718/
http://innovate.ee.ucla.edu/
https://arxiv.org/abs/1804.08711