Articles tagged with "neuromorphic-computing"
Brain-inspired neuromorphic computer uses tiny LEDs to cut AI energy use
German researchers from Technische Universität Braunschweig, Leibniz University Hannover, Ostfalia University of Applied Sciences, and PTB are developing a neuromorphic computer called BRIGHT that uses microscopic LEDs instead of traditional silicon transistors to drastically reduce AI energy consumption. Funded with USD 17.6 million by the state of Lower Saxony and the Volkswagen Foundation, the project aims to create a brain-inspired architecture that enables massively parallel signal processing, mimicking neuronal communication while consuming only a fraction of the power used by conventional AI hardware. A working demonstrator has already been built at the LENA research center, with plans to optimize optical connections, LED components, and hybrid chip integration over the next five years. The innovation combines silicon-based CMOS circuits with gallium nitride LED devices in a hybrid system, merging logic and control circuitry with efficient light emission. This approach supports neuromorphic computing by implementing neural networks directly in hardware rather than simulating them digitally, offering a more energy-efficient alternative to
Tags: energy, neuromorphic-computing, LEDs, microelectronics, AI-energy-efficiency, gallium-nitride, hybrid-integration

New multi-physics AI architecture boosts computing speed, efficiency
Chinese researchers at Peking University have developed a novel multi-physics computing architecture that significantly enhances processing speed and efficiency, achieving nearly a fourfold increase in performance. By integrating two innovative devices optimized for frequency generation, modulation, and in-memory computing, the system effectively matches frequency conversion across multiple physical domains—such as electrical current, charge, and light. This versatile architecture excels at complex operations like the Fourier Transform, a fundamental technique for converting signals into frequency-domain representations widely used in science and engineering. The new system boosts Fourier Transform processing speeds from approximately 130 billion to 500 billion operations per second while maintaining accuracy and reducing power consumption. This advancement addresses the limitations of traditional digital computing architectures, which struggle to meet the increasing demands of AI workloads. The Peking University team’s approach aligns with a broader global trend toward specialized computing paradigms—including neuromorphic, photonic, and analog architectures—that optimize specific mathematical functions to improve speed and energy efficiency. By enabling computations to run in their most efficient physical
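The frequency-domain conversion that the Fourier Transform performs can be sketched in a few lines of NumPy. The signal and frequencies below are illustrative and have nothing to do with the Peking University hardware; this only shows the operation the architecture accelerates:

```python
import numpy as np

# Build a 1-second test signal: a 50 Hz and a 120 Hz sine sampled at 1 kHz.
fs = 1000
t = np.arange(fs) / fs
signal = np.sin(2 * np.pi * 50 * t) + 0.5 * np.sin(2 * np.pi * 120 * t)

# The FFT converts time-domain samples into a frequency-domain spectrum.
spectrum = np.fft.rfft(signal)
freqs = np.fft.rfftfreq(len(signal), d=1 / fs)

# The two strongest bins sit exactly at the component frequencies.
peaks = sorted(float(f) for f in freqs[np.argsort(np.abs(spectrum))[-2:]])
print(peaks)  # -> [50.0, 120.0]
```

Digital systems pay for this with many multiply-accumulate operations; the article's point is that an analog multi-physics substrate can perform the same conversion natively.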
Tags: AI, computing-architecture, energy-efficiency, neuromorphic-computing, photonic-computing, in-memory-computing, robotics

World’s first neuromorphic supercomputer nears reality at US lab
Researchers at Sandia National Laboratories (SNL) have made significant progress toward creating the world’s first neuromorphic supercomputer by developing a novel algorithm that enables neuromorphic hardware to solve partial differential equations (PDEs). This breakthrough allows such hardware—designed to mimic the brain’s neural networks—to perform complex mathematical simulations like fluid dynamics and structural mechanics with far greater energy efficiency than conventional supercomputers. Traditionally, neuromorphic computers were thought to excel mainly at pattern recognition and neural network training, but this new work demonstrates their potential for large-scale scientific computations, which are typically resource-intensive. The algorithm, inspired by the structure and dynamics of cortical brain networks, bridges a previously unexplored connection between neuromorphic circuits and PDEs, opening avenues for advanced applied mathematics on this platform. Beyond mathematical modeling, the research holds promise for understanding brain diseases by framing them as computational disorders, potentially offering new insights into conditions such as Alzheimer’s and Parkinson’s. Additionally, the technology could drastically reduce
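For context, the conventional digital approach to such a PDE looks like the sketch below: an explicit finite-difference solve of the 1-D heat equation. This is a generic illustration of the workload class, not Sandia's spiking algorithm, and all parameters are made up:

```python
import numpy as np

# Explicit finite-difference solve of the 1-D heat equation u_t = alpha * u_xx,
# the kind of PDE workload the Sandia algorithm targets on neuromorphic hardware.
alpha, dx, dt = 0.01, 0.1, 0.1
n_steps, n_points = 500, 50

u = np.zeros(n_points)
u[n_points // 2] = 1.0  # initial heat pulse in the middle

r = alpha * dt / dx**2  # must be <= 0.5 for numerical stability
for _ in range(n_steps):
    # Each interior point relaxes toward the average of its neighbors.
    u[1:-1] = u[1:-1] + r * (u[2:] - 2 * u[1:-1] + u[:-2])

print(round(float(u.max()), 4))  # the pulse has diffused and flattened
```

On a digital machine every grid point is updated every step; the neuromorphic formulation instead lets spiking dynamics carry the diffusion, which is where the energy savings come from.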
Tags: energy, neuromorphic-computing, supercomputer, artificial-intelligence, energy-efficiency, computational-neuroscience, brain-inspired-computing

The brain may be the blueprint for the next computing frontier
The article discusses the rapid advancement of neuromorphic computing, a technology that models hardware on the brain’s neurons and spiking activity to achieve highly energy-efficient and low-latency data processing. Unlike traditional deep neural networks (DNNs) that rely on continuous numeric activations and consume significant power, spiking neural networks (SNNs) use asynchronous, event-driven spikes inspired by biological neurons. This approach enables dramatic reductions in energy use and processing time; for instance, Intel’s Loihi chips reportedly perform AI inference 50 times faster and with 100 times less energy than conventional CPUs and GPUs, while IBM’s TrueNorth chip achieves unprecedented energy efficiency at 400 billion operations per second per watt. However, SNNs currently face challenges in accuracy and training tool maturity compared to traditional AI models. The global race to develop neuromorphic hardware is intensifying, with major players like Intel and IBM in the US leading early efforts through chips such as Loihi and TrueNorth, and startups
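The threshold-driven firing that separates SNNs from conventional DNNs can be sketched with a minimal leaky integrate-and-fire neuron. The constants here are illustrative and not tied to Loihi or TrueNorth:

```python
# Minimal leaky integrate-and-fire (LIF) neuron: the membrane potential leaks
# toward zero, accumulates input, and emits a spike only when it crosses a
# threshold -- the event-driven behavior SNNs exploit for energy savings.
def lif_run(inputs, leak=0.9, threshold=1.0):
    v, spikes = 0.0, []
    for x in inputs:
        v = leak * v + x          # leaky integration of the input current
        if v >= threshold:        # fire only when the threshold is crossed
            spikes.append(1)
            v = 0.0               # reset the membrane after the spike
        else:
            spikes.append(0)
    return spikes

print(lif_run([0.3, 0.3, 0.3, 0.3, 0.0, 0.9, 0.9]))  # -> [0, 0, 0, 1, 0, 0, 1]
```

Note how sub-threshold inputs produce no output at all; in hardware that silence is what translates into the power savings the article describes.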
Tags: energy, neuromorphic-computing, spiking-neural-networks, AI-chips, brain-inspired-hardware, energy-efficiency, edge-computing

New brain-like computer could bring self-learning AI to devices
Engineers at The University of Texas at Dallas, led by Dr. Joseph S. Friedman, have developed a small-scale brain-inspired computer prototype that learns and processes information more like the human brain. Unlike traditional AI systems, which separate memory and processing and require extensive training with large labeled datasets, this neuromorphic hardware integrates memory and computation, enabling it to recognize patterns and make predictions with significantly fewer training computations and lower energy consumption. The design is based on Hebb’s law, where connections between artificial neurons strengthen when they activate together, allowing continuous self-learning. The prototype uses magnetic tunnel junctions (MTJs)—nanoscale devices with two magnetic layers separated by an insulator—that adjust their connectivity dynamically as signals pass through, mimicking synaptic changes in the brain. MTJs also provide reliable binary data storage, overcoming limitations seen in other neuromorphic approaches. Dr. Friedman aims to scale up this technology to handle more complex tasks, potentially enabling smart devices like phones and wearables to run
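Hebb's law as described above reduces to a simple weight update: when a pre- and a post-synaptic neuron activate together, the connection between them strengthens. The sketch below is a generic software illustration, not the MTJ hardware implementation:

```python
# Hebbian learning: delta_w = lr * pre * post, so only co-active
# neuron pairs strengthen their connection. Values are illustrative.
def hebbian_update(w, pre, post, lr=0.1):
    return [[wij + lr * pre[i] * post[j] for j, wij in enumerate(row)]
            for i, row in enumerate(w)]

w = [[0.0, 0.0], [0.0, 0.0]]
# Input neuron 0 and output neuron 1 fire together; only that weight grows.
w = hebbian_update(w, pre=[1, 0], post=[0, 1])
print(w)  # -> [[0.0, 0.1], [0.0, 0.0]]
```

In the UT Dallas prototype this role is played physically: signals passing through an MTJ adjust its connectivity, so the update happens in the device rather than in software.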
Tags: neuromorphic-computing, brain-inspired-AI, magnetic-tunnel-junctions, energy-efficient-AI, edge-computing, self-learning-AI, smart-devices

Ion-based artificial neurons mimic brain chemistry for AI computing
Researchers at USC have developed artificial neurons that physically replicate the electrochemical behavior of real brain cells, marking a significant advancement toward more efficient, brain-like AI hardware. Unlike conventional neuromorphic chips that digitally simulate brain activity, these new neurons utilize actual chemical and electrical processes, specifically relying on the movement of silver ions within a “diffusive memristor” structure. This approach mimics the brain’s natural signaling, where electrical signals convert to chemical signals at synapses and back again, enabling each artificial neuron to occupy the space of just one transistor—dramatically reducing size and potentially increasing speed and efficiency. The innovation addresses a key limitation of current computing systems: energy inefficiency. While modern computers are powerful, they consume excessive energy and lack the efficiency of the human brain, which learns from few examples using only about 20 watts of power. By leveraging ion dynamics rather than electron flow, the USC team aims to create hardware that supports more efficient, hardware-based learning akin to biological brains. Although the
Tags: artificial-neurons, neuromorphic-computing, ion-based-computing, energy-efficiency, AI-hardware, memristor-technology, brain-inspired-computing

Mercedes Vision Iconic merges classic form with smart tech future
Mercedes-Benz has unveiled the Vision Iconic concept at its Shanghai design studio, previewing the aesthetic and technological direction of the next-generation S-Class due in 2028. This two-door coupe fuses 1930s Art Deco-inspired styling with cutting-edge features such as neuromorphic computing, solar paint technology, and Level 4 autonomous driving. Its design recalls classic Mercedes models through elements like a long, sculpted body, brass accents, and an illuminated grille inspired by historic grilles from the W 108, W 111, and 600 Pullman, while integrating modern electric-era lighting. The interior combines luxurious materials like blue velvet upholstery and handcrafted marquetry with advanced digital interfaces, blending traditional craftsmanship with futuristic technology. Technologically, the Vision Iconic incorporates a neuromorphic computing system that processes data far more efficiently than conventional CPUs, enabling precise recognition of pedestrians, road signs, and obstacles. Its Level 4 autonomy allows the vehicle to self-drive on mapped routes with enhanced maneuver
Tags: energy, autonomous-vehicles, neuromorphic-computing, solar-paint, electric-vehicles, automotive-technology, sustainability

World's largest-scale brain-like computer with 2 billion neurons unveiled
Chinese engineers at Zhejiang University and Zhejiang Lab have unveiled "Darwin Monkey," the world’s largest-scale brain-like neuromorphic computer, designed to mimic the macaque monkey brain. The system integrates 960 third-generation Darwin 3 neuromorphic computing chips across 15 blade-style servers, supporting over 2 billion spiking neurons and more than 100 billion synapses. This neuron count approaches that of a macaque brain, enabling advanced cognitive functions such as vision, hearing, language, learning, logical reasoning, content generation, and mathematical problem-solving. The Darwin 3 chips feature specialized brain-inspired instruction sets and an online neuromorphic learning mechanism, marking a significant technological breakthrough in brain-inspired computing and operating systems. Consuming approximately 2,000 watts during typical operation, Darwin Monkey represents the first neuromorphic brain-like computer based on dedicated neuromorphic chips. The system can run large brain-like models such as DeepSeek, demonstrating its capacity for complex intelligent applications. This development follows similar
Tags: materials, neuromorphic-computing, brain-like-computer, neural-processing-units, advanced-chips, energy-consumption, artificial-intelligence

Can an AI chip that mimics the brain beat the data deluge?
The article discusses BrainChip’s Akida processor, a neuromorphic AI chip inspired by the brain’s energy-efficient event-driven processing. Unlike traditional AI chips that process every data frame regardless of changes, Akida leverages spiking neural networks to compute only when input signals exceed a threshold, significantly reducing redundant calculations. This approach exploits data sparsity by processing only changes between frames, leading to power savings of up to 100 times in scenarios with minimal activity, such as a static security camera feed. However, in highly dynamic scenes with frequent changes, these savings diminish. Akida’s architecture uses a digital implementation of spiking neural networks, employing activation functions like ReLU to trigger computations selectively. This mimics biological neurons that fire only when stimulated beyond a threshold, enabling progressively fewer computations across network layers. Despite these efficiency gains, neuromorphic chips like Akida remain niche due to limitations such as 8-bit precision constraints and gaps in development tooling. While promising for edge devices constrained by power,
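The sparsity idea described above, computing only where the input changes past a threshold, can be sketched as a delta mask between frames. The frames and threshold below are hypothetical, not Akida's actual pipeline:

```python
import numpy as np

# Event-driven sparsity sketch: rather than recomputing every pixel of
# every frame, flag only pixels whose change since the previous frame
# exceeds a threshold, and compute only where the mask is True.
def changed_pixels(prev, curr, threshold=0.1):
    delta = np.abs(curr - prev)
    return delta > threshold  # boolean mask of pixels worth processing

static = np.zeros((4, 4))
mostly_static = static.copy()
mostly_static[0, 0] = 1.0  # a single pixel changed between frames

mask = changed_pixels(static, mostly_static)
print(int(mask.sum()), "of", mask.size, "pixels need computation")
```

In this toy case 1 of 16 pixels needs work, mirroring the static-camera scenario where the article reports the largest savings; a fully dynamic scene would light up the whole mask and erase the advantage.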
Tags: AI-chip, neuromorphic-computing, energy-efficiency, edge-devices, IoT-sensors, brain-inspired-technology, low-power-AI

Robots get brain-like navigation to run for days using 90% less power
Researchers at the QUT Centre for Robotics have developed a brain-inspired robot navigation system called Locational Encoding with Neuromorphic Systems (LENS) that operates using less than 10% of the energy required by conventional navigation systems. By mimicking the human brain’s efficient processing, LENS uses specialized algorithms that process information as electrical spikes, similar to neuronal signals. This neuromorphic computing approach drastically reduces the energy consumption for visual localization by up to 99%, enabling robots to operate longer and travel further on limited power supplies. The system demonstrated effective location recognition along an 8 km route while requiring only 180 KB of storage, about 300 times less than traditional systems require. LENS achieves its efficiency through a combination of advanced technologies, including an event camera that detects pixel-level brightness changes continuously rather than capturing full images, closely replicating human visual processing. This “movement-focused” data is then processed by a spiking neural network on a low-power chip within a compact system. Such
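An event camera's output can be approximated from two conventional frames: emit a signed event only where brightness changes past a contrast threshold, with +1 for brightening and -1 for darkening. This is a simplified illustration, not the LENS pipeline, and all values are hypothetical:

```python
import numpy as np

# Event-camera sketch: instead of streaming full frames, emit per-pixel
# events only where log-brightness changes past a contrast threshold c,
# with a polarity (+1 brighter, -1 darker). Unchanged pixels emit nothing.
def to_events(prev, curr, c=0.2):
    d = np.log1p(curr) - np.log1p(prev)  # log scale, as in real event sensors
    events = np.zeros_like(d, dtype=int)
    events[d > c] = 1     # ON events (pixel got brighter)
    events[d < -c] = -1   # OFF events (pixel got darker)
    return events

prev = np.array([[0.0, 1.0], [1.0, 1.0]])
curr = np.array([[1.0, 0.0], [1.0, 1.0]])
ev = to_events(prev, curr)
print(ev.tolist())  # -> [[1, -1], [0, 0]]
```

Only the two changed pixels produce events; the static half of the scene generates no data at all, which is why pairing such a sensor with a spiking network yields the tiny storage and power figures reported above.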
Tags: robot, energy-efficiency, neuromorphic-computing, autonomous-navigation, spiking-neural-networks, event-camera, low-power-robotics

Brain-like thinking AI chip with 100x less energy use developed
Tags: energy, AI-chip, neuromorphic-computing, energy-efficiency, cybersecurity, on-device-processing, pattern-recognition