LaunchBox Community Forums

I think HPE has bumped-up computational speeds by a factor of up to 10,000!



Hewlett Packard Enterprise's ultimate goal is to create computer chips that compute quickly and make decisions based on probabilities and associations, much like the brain does. The chips will use learning models and algorithms to deliver approximate results that can be used in decision-making. While it will take years for such chips to be commercially available, HPE is testing its brain-like computing model through a prototype system built from circuit boards and memory chips. The computer, shown for the first time at the recent Discover conference in Las Vegas, is designed to operate the way the brain's neurons and synapses do.

Here's where I need the brainiac interpretation (but I think the last sentence in the abstract below supports the topic name):

Abstract: Vector-matrix multiplication dominates the computation time and energy for many workloads, particularly neural network algorithms and linear transforms (e.g., the Discrete Fourier Transform). Utilizing the natural current-accumulation feature of the memristor crossbar, we developed the Dot-Product Engine (DPE) as a high-density, high-power-efficiency accelerator for approximate matrix-vector multiplication. We first invented a conversion algorithm to map arbitrary matrix values appropriately to memristor conductances in a realistic crossbar array, accounting for device physics and circuit issues to reduce computational errors. Accurate device-resistance programming in large arrays is enabled by closed-loop pulse tuning and access transistors. To validate our approach, we simulated and benchmarked one of the state-of-the-art neural networks for pattern recognition on the DPE. The results show no accuracy degradation compared to the software approach (99% pattern-recognition accuracy on the MNIST data set) with only a 4-bit DAC/ADC requirement, while the DPE can achieve a speed-efficiency product of 1,000x to 10,000x compared to a custom digital ASIC.
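For anyone who wants the gist of the abstract in code: a crossbar does a matrix-vector multiply "for free" because each column wire sums the currents g_ij * v_i flowing through its memristors (Ohm's law plus Kirchhoff's current law). Here's a minimal numerical sketch of that idea, not HPE's actual DPE implementation; the conductance range, the linear value-to-conductance mapping, and the uniform 4-bit quantizer are all illustrative assumptions on my part:

```python
import numpy as np

def matrix_to_conductance(W, g_min=1e-6, g_max=1e-4):
    """Linearly map arbitrary matrix values onto an assumed memristor
    conductance range [g_min, g_max] siemens. A crude stand-in for the
    paper's conversion algorithm (which also handles device physics)."""
    w_min, w_max = W.min(), W.max()
    return g_min + (W - w_min) * (g_max - g_min) / (w_max - w_min)

def quantize(x, bits=4):
    """Model a 4-bit DAC/ADC as uniform quantization over x's range."""
    levels = 2 ** bits - 1
    lo, hi = x.min(), x.max()
    return lo + np.round((x - lo) / (hi - lo) * levels) / levels * (hi - lo)

rng = np.random.default_rng(0)
W = rng.standard_normal((8, 4))   # "weights" to be stored in the crossbar
v = rng.standard_normal(8)        # input vector, applied as row voltages

G = matrix_to_conductance(W)      # program conductances once
# Each output current is a dot product: i_j = sum_i G[i, j] * v[i].
# The crossbar does this in one analog step; here we model it digitally.
i_out = G.T @ quantize(v)
```

The point of the abstract's benchmark is that even with the inputs and outputs squeezed through 4-bit converters, this approximate multiply is accurate enough for MNIST-class pattern recognition, while avoiding the O(m*n) multiply-accumulate operations a digital chip would perform.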
Click here if ya wanna read the entire "Dot-Product Engine for Neuromorphic Computing: Programming 1T1M Crossbar to Accelerate Matrix-Vector Multiplication" PDF file. ;)