Articles tagged with "semiconductor-technology"
Molecular electronics could offer 1000x more density than silicon chips
The article discusses the emerging field of molecular electronics as a potential successor to traditional silicon-based chips, which are approaching physical and economic limits. Current leading-edge chips, such as Apple’s A17 Pro built on TSMC’s 3 nm process, face challenges like electron tunneling causing leakage and excessive heat, alongside the prohibitive cost of advanced fabrication facilities exceeding $20 billion. Molecular electronics proposes using individual molecules as functional electronic components, exploiting quantum properties like directional electron flow and quantum interference to achieve device densities up to 10¹⁴ per square centimeter—about 1,000 times greater than silicon chips. Molecular electronics operates on fundamentally different principles, with charge transport occurring via quantum tunneling across molecular junctions. The conductance depends heavily on molecular length and configuration, with benzene-based molecules demonstrating constructive or destructive quantum interference based on connection geometry, enabling novel electronic behaviors. Creating reliable molecular junctions requires electrodes spaced under 3 nanometers, achieved through techniques such as electromigration and self-assembly.
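The length dependence described here is usually captured by the coherent-tunneling relation G = G_c·exp(−βL). A minimal sketch of that model in Python; the contact conductance G_c and attenuation factor β are illustrative assumptions, not values from the article:

```python
import math

# Coherent-tunneling model for molecular junctions: conductance decays
# exponentially with molecular length L, G = G_c * exp(-beta * L).
# G_c and beta below are assumed, typical-of-literature values.

G0 = 7.748e-5        # conductance quantum 2e^2/h, in siemens
G_c = 0.5 * G0       # assumed contact conductance
beta = 0.25          # assumed attenuation factor per angstrom (conjugated molecules)

for length_A in (5, 10, 15, 20, 25):        # molecular length in angstroms
    G = G_c * math.exp(-beta * length_A)
    print(f"L = {length_A:2d} A -> G = {G:.2e} S ({G / G0:.4f} G0)")
```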
Tags: materials, molecular-electronics, nanotechnology, semiconductor-technology, quantum-tunneling, chip-fabrication, nanoelectronics

From invisibility cloaks to AI chips: Neurophos raises $110M to build tiny optical processors for inferencing
Neurophos, an Austin-based startup, has developed a groundbreaking “metasurface modulator” that enables tiny optical processors capable of performing matrix-vector multiplication—an essential operation for AI inferencing—much faster and more efficiently than traditional silicon-based GPUs and TPUs. By miniaturizing optical transistors to a scale about 10,000 times smaller than conventional optical components, Neurophos can fit thousands of these modulators on a single chip, significantly boosting computational speed and energy efficiency. This innovation addresses key challenges in photonic computing, such as large component size and high power consumption due to digital-analog conversions, positioning Neurophos’s optical processing units (OPUs) as a promising alternative to silicon chips in AI data centers. The company recently raised $110 million in a Series A funding round led by Bill Gates’ venture firm Gates Frontier, with participation from Microsoft’s M12 and other investors. CEO Dr. Patrick Bowen claims that Neurophos’s OPUs will outperform Nvidia’s GPUs.
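Matrix-vector multiplication is the operation at the heart of the claim; a toy Python sketch of why it dominates inference cost (the layer width n is an assumption for illustration, not a Neurophos specification):

```python
import numpy as np

# y = W @ x underlies every dense and attention layer in inference.
# A digital processor spends O(n^2) multiply-accumulates per product;
# an optical modulator array encodes W in the medium and computes the
# product in a single pass of light.

rng = np.random.default_rng(0)
n = 1024                          # assumed layer width
W = rng.standard_normal((n, n))   # weights, fixed in the optical medium
x = rng.standard_normal(n)        # input activations, encoded on light

y = W @ x                         # what one optical pass computes
print(f"one layer = {n * n:,} multiply-accumulates per input vector")
```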
Tags: materials, optical-processors, AI-chips, photonic-chips, energy-efficiency, metasurface-modulator, semiconductor-technology

Quadric rides the shift from cloud AI to on-device inference — and it’s paying off
Quadric, a chip-IP startup founded by veterans of bitcoin mining firm 21E6, is capitalizing on the growing demand for on-device AI inference as companies and governments seek to reduce cloud infrastructure costs and enhance sovereign AI capabilities. Originally focused on automotive applications like driver assistance, Quadric has expanded into laptops, industrial devices, and other markets, leveraging its programmable AI processor IP that customers can embed into their own silicon. This approach, combined with a software stack and toolchain for running models locally, has driven significant growth: Quadric’s licensing revenue surged from about $4 million in 2024 to $15–20 million in 2025, with a target of $35 million in 2026, boosting its valuation to $270–300 million. The shift toward on-device AI is fueled by the rise of transformer-based models and the increasing cost and complexity of centralized AI infrastructure. Quadric’s chip-agnostic technology supports distributed AI setups where inference runs locally on devices such as laptops, vehicles, and industrial equipment.
Tags: IoT, AI-inference, on-device-AI, chip-IP, automotive-AI, edge-computing, semiconductor-technology

Elon Musk says Tesla’s restarted Dojo3 will be for ‘space-based AI compute’
Elon Musk announced that Tesla plans to restart development of its third-generation AI chip, Dojo3, but with a new focus on “space-based AI compute” rather than training self-driving models on Earth. This marks a strategic shift following Tesla’s shutdown of the original Dojo supercomputer project five months earlier, which included disbanding the Dojo team after the departure of its lead, Peter Bannon. At that time, Tesla had intended to rely more on external partners like Nvidia, AMD, and Samsung for AI compute and chip manufacturing. However, Musk’s recent statements suggest a renewed commitment to in-house chip development, highlighting that Tesla’s AI5 chip design is progressing well and that the upcoming AI7/Dojo3 chip will be geared toward operating AI data centers in space. Musk’s vision aligns with broader industry discussions about the limitations of Earth’s power grids and the potential benefits of off-planet data centers powered by constant solar energy. Tesla aims to rebuild its Dojo team.
Tags: AI-chips, Tesla-Dojo, space-based-computing, energy-harvesting, semiconductor-technology, autonomous-driving, AI-hardware

China's supercooled radar chips may boost stealth jet detection by 40%
Chinese researchers at Xidian University have developed a novel supercooling technique for gallium nitride (GaN) semiconductor chips that could enhance military radar performance by approximately 40%. The innovation addresses a critical limitation in high-power electronics—heat buildup—by improving the thermal management at the materials level. By precisely controlling the growth of the bonding layer inside the chip, the team reduced thermal resistance by about one-third, enabling more efficient heat dissipation and power handling. This advancement allows radar systems operating in the X and Ka frequency bands to transmit stronger signals and detect weaker echoes without increasing size or weight, benefiting both military and civilian applications such as advanced aircraft radars and next-generation wireless networks. The breakthrough has significant strategic implications, particularly for China’s stealth aircraft like the J-20 and J-35, which already use GaN-based radars with longer detection ranges than older systems. In contrast, U.S. stealth platforms like the F-22 rely on older radar technology, and upgrades to GaN-based radars are still in progress.
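A rough illustration of the thermal argument (values assumed, not from the Xidian paper): junction heating follows ΔT = P·R_th, and radar detection range scales with the fourth root of transmitted power, so cutting thermal resistance by a third raises power headroom by half; the article's 40% figure presumably aggregates more than this single effect:

```python
# dT = P * R_th (junction heating) and R_max ∝ P^(1/4) (radar range
# equation, other factors fixed). All numbers are illustrative.

dT_max = 100.0                   # allowed temperature rise, kelvin (assumed)
R_th_old = 1.0                   # baseline thermal resistance, K/W (assumed)
R_th_new = R_th_old * (2 / 3)    # reduced by about one-third, per the article

P_old = dT_max / R_th_old        # max dissipated power before
P_new = dT_max / R_th_new        # max dissipated power after
range_gain = (P_new / P_old) ** 0.25

print(f"power handling: {P_new / P_old:.2f}x")   # 1.50x
print(f"detection range: {range_gain:.3f}x")     # ~1.11x from power alone
```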
Tags: materials, energy, semiconductor-technology, gallium-nitride, radar-systems, chip-cooling, high-power-electronics

OpenAI signs deal, worth $10 billion, for compute from Cerebras
OpenAI has entered a multi-year agreement with AI chipmaker Cerebras, securing 750 megawatts of compute power from 2026 through 2028 in a deal valued at over $10 billion. This partnership aims to accelerate AI processing speeds, enabling faster response times for OpenAI’s customers by leveraging Cerebras’s specialized AI chips, which the company claims outperform traditional GPU-based systems like those from Nvidia. The enhanced compute capacity is expected to support real-time AI inference, which Cerebras CEO Andrew Feldman likens to the transformative impact broadband had on the internet. Cerebras, which gained prominence following the AI surge sparked by ChatGPT’s 2022 launch, has been expanding despite postponing its IPO multiple times. The company is reportedly in talks to raise an additional $1 billion at a $22 billion valuation. OpenAI’s strategy involves diversifying its compute infrastructure to optimize performance across different workloads, with Cerebras providing a dedicated low-latency inference solution. This collaboration is a key part of that diversification effort.
Tags: energy, AI-chips, compute-power, data-centers, high-performance-computing, semiconductor-technology, AI-infrastructure

Nvidia acquires AI chip challenger Groq for $20B, report says
Nvidia is reportedly acquiring AI chip startup Groq for $20 billion, as competition intensifies among tech companies to enhance their AI computing capabilities. While Nvidia’s GPUs have become the industry standard for AI processing, Groq has developed a distinct type of chip known as a language processing unit (LPU), which Groq claims is ten times faster and consumes one-tenth the energy of traditional solutions. Groq’s CEO, Jonathan Ross, previously helped develop Google’s tensor processing unit (TPU). Groq has experienced rapid growth, recently raising funds at a $6.9 billion valuation and expanding its user base to over 2 million developers, up from approximately 356,000 the previous year. The acquisition would strengthen Nvidia’s position in the AI hardware market by integrating Groq’s advanced chip technology. Nvidia has not yet provided an official comment on the reported deal.
Tags: energy, AI-chips, Nvidia, Groq, semiconductor-technology, language-processing-unit, computing-power

China’s light-based AI chips beat NVIDIA GPUs at some tasks by 100x
Chinese researchers have developed new photonic (light-based) AI chips, such as ACCEL and LightGen, that reportedly outperform NVIDIA’s GPUs by over 100 times in speed and energy efficiency for specific generative AI tasks like video production, image synthesis, and low-light vision. Unlike traditional NVIDIA GPUs, which use electrons flowing through transistors to execute flexible, general-purpose computations, these photonic chips perform preset analog mathematical operations via optical interference. This approach enables extremely fast and power-efficient processing but limits their flexibility and applicability to narrowly defined AI workloads. ACCEL, developed by Tsinghua University, is a hybrid chip combining photonic and analog electronic components, capable of delivering 4.6 peta-operations per second while consuming minimal power. LightGen, created by a collaboration between Shanghai Jiao Tong University and Tsinghua University, is a fully optical chip with over 2 million photonic neurons, excelling in tasks like image generation, style transfer, and 3D image manipulation.
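A minimal sketch of the “preset analog operation” idea: a passive optical element applies a fixed unitary transform to the complex field amplitudes passing through it, and meshes of such elements compose larger fixed matrices. The 2x2 coupler below is a textbook example, not a description of ACCEL or LightGen internals:

```python
import numpy as np

# A lossless 50/50 directional coupler implements a fixed 2x2 unitary;
# interference of the two input fields computes the matrix product.

coupler = np.array([[1, 1j],
                    [1j, 1]]) / np.sqrt(2)

assert np.allclose(coupler @ coupler.conj().T, np.eye(2))  # lossless: unitary

field_in = np.array([1.0, 0.0])    # light enters port 1 only
field_out = coupler @ field_in     # computed "for free" by interference
print(np.abs(field_out) ** 2)      # [0.5, 0.5]: power split by interference
```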
Tags: materials, photonic-chips, AI-hardware, semiconductor-technology, energy-efficiency, optical-computing, microchips

Unconventional AI confirms its massive $475M seed round
Unconventional AI, a startup founded by Naveen Rao, former head of AI at Databricks, has secured $475 million in seed funding at a $4.5 billion valuation. The funding round was led by Andreessen Horowitz and Lightspeed Ventures, with additional investments from Lux Capital and DCVC. This initial raise is part of a larger planned round that could reach up to $1 billion. Although the final valuation is slightly below the $5 billion Rao initially aimed for, the company’s value may increase if the full funding target is met. The startup aims to develop a new, energy-efficient computer specifically designed for AI applications, with Rao emphasizing a goal to achieve efficiency comparable to biological systems. Rao has a strong track record in AI and machine learning startups, having previously founded MosaicML, acquired by Databricks for $1.3 billion, and Nervana Systems, acquired by Intel for over $400 million. Unconventional AI’s ambitious funding and vision position it as one of the most closely watched new entrants in AI hardware.
Tags: energy, AI-hardware, energy-efficient-computing, startup-funding, semiconductor-technology, machine-learning, computer-architecture

Andy Jassy says Amazon’s Nvidia competitor chip is already a multi-billion-dollar business
Amazon CEO Andy Jassy announced at the AWS re:Invent conference that the company’s AI chip business, centered on its Nvidia competitor Trainium, is already a multi-billion-dollar revenue run-rate enterprise. The current generation, Trainium2, boasts over one million chips in production and is used by more than 100,000 companies, powering the majority of usage on Amazon’s AI app development platform, Bedrock. Jassy emphasized that Trainium2 offers compelling price-performance advantages over other GPUs, making it a popular choice among AWS’s extensive cloud customer base. A significant portion of Trainium2’s revenue comes from Anthropic, a key AWS partner using over 500,000 Trainium2 chips in Project Rainier, Amazon’s large-scale AI server cluster designed to support Anthropic’s advanced model training needs. While other major AI players like OpenAI also use AWS, they primarily rely on Nvidia chips, underscoring the challenge of competing with Nvidia’s entrenched GPU technology and proprietary CUDA software ecosystem.
Tags: energy, AI-chips, cloud-computing, semiconductor-technology, Amazon-Trainium, Nvidia-competitor, data-centers

Intel expands Panther Lake processor edge applications to robotics - The Robot Report
Intel has unveiled detailed architectural information about its Intel Core Ultra Series 3 processor, codenamed Panther Lake, highlighting its expanded edge applications including robotics. To support this, Intel introduced a new Robotics AI software suite and reference board designed to help customers rapidly develop cost-effective robots with advanced AI capabilities for control and perception. Panther Lake, Intel’s first product built on the cutting-edge 18A semiconductor process, is set to begin high-volume production in 2025 at Intel’s new Fab 52 facility in Chandler, Arizona, with initial shipments expected by the end of the year and broad availability in January 2026. The Panther Lake processor leverages Intel’s 18A process, the most advanced semiconductor technology developed and manufactured in the U.S., featuring innovations such as RibbonFET transistor architecture and PowerVia backside power delivery. The processor offers a scalable multi-chiplet design, combining up to 16 performance and efficient cores, a new Intel Arc GPU with up to 12 Xe cores, and an updated NPU for on-device AI acceleration.
Tags: robotics, Intel-Panther-Lake, AI-processors, semiconductor-technology, edge-computing, AI-acceleration, advanced-manufacturing

Intel unveils new processor powered by its 18A semiconductor tech
Intel has unveiled its next-generation Intel Core Ultra processor, codenamed Panther Lake, marking a significant hardware upgrade powered by the company’s new 18A semiconductor process. This chip, the first built using the 18A technology, is expected to ship later in 2025 and is manufactured at Intel’s Fab 52 facility in Chandler, Arizona, which began operations in 2025. Intel CEO Lip-Bu Tan emphasized that this advancement signals a new era in computing, driven by breakthroughs in semiconductor technology, manufacturing, and packaging, aligning with his vision to revitalize Intel’s engineering culture and innovation. In addition to Panther Lake, Intel previewed its Xeon 6+ server processor, codenamed Clearwater Forest, also based on the 18A process, with a planned launch in the first half of 2026. This announcement represents Intel’s largest manufacturing milestone in years and highlights the strategic importance of domestic chip production. Intel’s press release underscored that the 18A process is the most advanced semiconductor technology developed and manufactured in the United States.
Tags: materials, semiconductor-technology, Intel-processors, 18A-process, chip-manufacturing, advanced-packaging, computing-innovation

Scientists create quantum 'telephones' to connect long-distance atoms
Researchers at the University of New South Wales (UNSW) in Australia have successfully created quantum entanglement between two distant phosphorus atoms embedded in silicon, marking a significant advancement in quantum computing. Using electrons as a bridge, they established entangled states between the nuclear spins of atoms separated by up to 20 nanometers. This breakthrough was demonstrated through a two-qubit controlled-Z logic operation, achieving a nuclear Bell state with a fidelity of approximately 76% and a concurrence of 0.67. The findings, published in the journal Science, suggest that nuclear spin-based quantum computers can be developed using existing silicon technology and manufacturing processes. The key innovation lies in using electrons—capable of “spreading out” in space—to mediate communication between atomic nuclei that were previously isolated like people in soundproof rooms. By enabling these nuclei to “talk” over a distance via electron exchange interactions, the researchers effectively created quantum “telephones” that allow long-distance entanglement. This method is robust because the electrons serve only as a temporary bridge, while the quantum information itself remains stored in the well-shielded nuclear spins.
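The controlled-Z demonstration can be sketched in the ideal case: a CZ gate plus single-qubit Hadamards turns a product state into a Bell state with concurrence 1 (the experiment measured 0.67 under real noise). A toy numpy version, not the authors' actual pulse sequence:

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate
CZ = np.diag([1.0, 1.0, 1.0, -1.0])            # controlled-Z gate
I2 = np.eye(2)

psi = np.kron(H @ [1, 0], H @ [1, 0])   # prepare |+>|+> from |0>|0>
psi = CZ @ psi                          # entangling step
psi = np.kron(I2, H) @ psi              # basis change on qubit 2

# psi is now the Bell state (|00> + |11>)/sqrt(2)
a, b, c, d = psi
concurrence = 2 * abs(a * d - b * c)    # C = 2|ad - bc| for pure states
print(np.round(psi, 3), f"concurrence = {concurrence:.2f}")   # 1.00 ideally
```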
Tags: quantum-computing, silicon-microchips, quantum-entanglement, semiconductor-technology, spin-qubits, nuclear-spin, quantum-communication

Engineers use electric fields to form circuits beyond silicon limits
Researchers have developed a novel method to fabricate atomically thin logic circuits using two-dimensional (2D) semiconductors, addressing the limitations of traditional silicon-based transistor scaling. Conventional silicon fabrication struggles at nanoscale dimensions due to electrical interference, leakage, and complex manufacturing, prompting exploration of alternative materials like molybdenum disulfide (MoS₂) and tungsten diselenide (WSe₂). These 2D materials offer efficient charge transport and tunable transistor types but have been difficult to integrate into circuits at scale because existing methods rely on high temperatures, vacuum environments, or manual placement, which hinder consistent, large-scale production. The new approach combines solution-based electrochemical exfoliation to produce large, stable 2D nanosheets with electric-field-guided assembly to precisely position n-type MoS₂ and p-type WSe₂ between electrodes without lithography or high-temperature steps. Electrochemical exfoliation uses voltage to insert ions between crystal layers, gently separating them into micron-scale nanosheets suspended in solution.
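Electric-field-guided placement of this kind is commonly modeled as dielectrophoresis, in which a polarizable particle is pulled toward the field maximum in the electrode gap; whether the authors use exactly this formulation is an assumption here, and every parameter value below is illustrative:

```python
import math

# Dielectrophoretic force on a polarizable particle (spherical
# approximation): F = 2*pi*eps_m * r^3 * Re[K] * grad(|E|^2).
# All values below are assumed for illustration.

eps0 = 8.854e-12          # vacuum permittivity, F/m
eps_m = 78 * eps0         # aqueous suspension medium (assumed)
r = 0.5e-6                # effective radius of a micron-scale sheet, m
K = 0.8                   # assumed Clausius-Mossotti factor
grad_E2 = 1e16            # assumed gradient of |E|^2 near the gap, V^2/m^3

F = 2 * math.pi * eps_m * r**3 * K * grad_E2
print(f"DEP force ~ {F:.1e} N")   # a few piconewtons: enough to steer a sheet
```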
Tags: materials, 2D-semiconductors, electric-field-assembly, nanosheets, transistor-fabrication, advanced-materials, semiconductor-technology

US engineers build transistor-like switch for quantum excitons
University of Michigan engineers have developed the first transistor-like switch that can control the flow of excitons—quantum quasiparticles that carry energy without charge—at room temperature. Excitons form when light excites electrons in semiconductors, creating electron-hole pairs that move together as neutral energy packets. Unlike electrons, excitons do not generate heat through energy loss, making them promising candidates for more efficient computing technologies. The team overcame a major challenge by designing a nanostructured ridge that guides excitons along a controlled path and using electrodes as gates to switch exciton flow on and off, achieving an on-off switching ratio above 19 decibels. This breakthrough opens the door to excitonic circuits that could significantly reduce energy consumption and heat generation in computing systems, addressing current limitations faced by electronics in AI and other demanding applications. The researchers also demonstrated an optoexcitonic switch using light to propel excitons rapidly along the ridge, suggesting potential for faster and cooler data transfer in devices.
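Converting the quoted on-off ratio from decibels to a linear contrast puts the switching figure in scale:

```python
# 10^(dB/10) for a power-like quantity such as exciton flux
db = 19.0
ratio = 10 ** (db / 10)
print(f"{db} dB -> {ratio:.0f}:1 on/off contrast")   # ~79:1
```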
Tags: quantum-excitons, excitonics, nano-switch, energy-efficient-computing, semiconductor-technology, optoelectronics, solar-cells

Advanced DC breaker tech speeds up protection, cuts energy loss
Researchers at Oak Ridge National Laboratory (ORNL) have developed the world’s fastest medium-voltage direct current (DC) circuit breaker, leveraging semiconductor technology—specifically thyristors—to overcome limitations of traditional mechanical breakers. Unlike alternating current (AC), DC lacks a natural zero-crossing point to interrupt current flow, making mechanical breakers slow and prone to arcing and fire risks during faults. The new semiconductor-based breaker can interrupt 1,400 volts in under 50 microseconds, which is four to six times faster than previous thyristor-based systems, significantly enhancing safety and reliability for DC power grids. The design is scalable by connecting multiple breaker units in series, successfully tested up to 1,800 volts, with ongoing work targeting 10,000 volts to meet the demands of high-voltage DC grids. This breakthrough is critical for modern energy infrastructure, particularly for sectors like AI data centers and advanced manufacturing that benefit from DC’s higher efficiency, lower transmission losses, and support for multi-directional power flow.
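A small illustration of the zero-crossing point (normalized currents, 60 Hz assumed):

```python
import numpy as np

# A 60 Hz AC fault current crosses zero every half cycle (~8.3 ms),
# giving a mechanical breaker a natural instant to open; DC never does,
# so a semiconductor breaker must force the current off itself.

t = np.arange(0.0, 0.05, 1e-6)       # 50 ms window, 1 us steps
i_ac = np.sin(2 * np.pi * 60 * t)    # normalized AC fault current

crossings = np.count_nonzero(i_ac[:-1] * i_ac[1:] < 0)
print(f"AC zero crossings inside 50 ms: {crossings}")   # 5, one every ~8.3 ms
print("DC zero crossings inside 50 ms: 0 (current is constant)")
```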
Tags: energy, power-grid, DC-circuit-breaker, semiconductor-technology, medium-voltage, renewable-energy, electrical-safety

Breakthrough silicon chip fuses photonics and quantum generators
Researchers from Boston University, UC Berkeley, and Northwestern University have developed the world’s first integrated electronic–photonic–quantum chip using standard 45-nanometer semiconductor technology. This breakthrough device combines twelve synchronized quantum light sources, known as “quantum light factories,” on a single chip, each generating correlated photon pairs essential for quantum computing, sensing, and secure communication. The chip integrates microring resonators, on-chip heaters, photodiodes, and embedded control logic to maintain real-time stabilization of the quantum light generation process, overcoming challenges posed by temperature fluctuations and manufacturing variations. The innovation lies in embedding a real-time feedback control system directly on the chip, enabling continuous correction of misalignments and drift, which is critical for scalable quantum systems. The team successfully adapted quantum photonics design to meet the stringent requirements of a commercial CMOS platform, originally developed for AI and supercomputing interconnects. This collaboration demonstrates that complex quantum photonic systems can be reliably built and stabilized within commercial semiconductor manufacturing platforms.
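The embedded feedback idea reduces to a control loop: a photodiode senses how far a ring's resonance has drifted and a heater steers it back. A toy proportional-integral sketch with made-up gains and a constant thermal disturbance; the chip's actual control law is not described at this level:

```python
# Toy PI loop: heater drive counteracts a constant thermal drift until
# the ring's detuning from the pump settles back to the setpoint.

kp, ki = 0.4, 0.05             # assumed controller gains
detuning, integral = 1.0, 0.0  # start misaligned by 1 (arbitrary units)
drift = 0.02                   # constant thermal disturbance per step (assumed)

for step in range(1, 41):
    error = 0.0 - detuning             # setpoint is zero detuning
    integral += error
    heater = kp * error + ki * integral
    detuning += heater + drift         # plant: heater shifts the resonance
    if step % 10 == 0:
        print(f"step {step:2d}: detuning = {detuning:+.4f}")
# detuning decays toward zero as the integral term cancels the drift
```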
Tags: quantum-computing, photonics, semiconductor-technology, quantum-light-sources, integrated-circuits, quantum-sensors, chip-manufacturing

World's first 2D material built computer completely ditches silicon
Researchers at Penn State have developed the world’s first computer built entirely from two-dimensional (2D) materials, completely eliminating the use of silicon. This innovative computer uses complementary metal-oxide-semiconductor (CMOS) technology based on two different 2D materials: molybdenum disulfide for n-type transistors and tungsten diselenide for p-type transistors. Unlike silicon, which faces performance degradation as devices shrink, these 2D materials maintain exceptional electronic properties even at atomic thickness, offering a promising path for faster, thinner, and more efficient electronics. The team employed metal-organic chemical vapor deposition (MOCVD) to grow large sheets of these 2D materials and fabricated over 1,000 transistors of each type. By fine-tuning fabrication and post-processing steps, they adjusted transistor threshold voltages to build fully functional CMOS logic circuits. The resulting 2D CMOS computer operates at low supply voltages with minimal power consumption and can perform simple logic operations at kilohertz-scale clock frequencies.
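A behavioral sketch of why both materials are needed: CMOS logic pairs a p-type pull-up network with an n-type pull-down, here standing in for the WSe₂ and MoS₂ devices respectively (logic only, no device physics):

```python
def inverter(a: int) -> int:
    # p-type (WSe2) pulls the output high when the input is 0;
    # n-type (MoS2) pulls it low when the input is 1.
    return 1 - a

def nand(a: int, b: int) -> int:
    # parallel p-type pull-up, series n-type pull-down
    return 1 - (a & b)

for a in (0, 1):
    for b in (0, 1):
        print(f"a={a} b={b}  NOT a={inverter(a)}  a NAND b={nand(a, b)}")
```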
Tags: 2D-materials, semiconductor-technology, CMOS-computer, molybdenum-disulfide, tungsten-diselenide, transistor-fabrication, silicon-alternative

China unveils world’s first non-binary AI chip for industry use
China has begun mass production of the world’s first non-binary AI chips, developed by Professor Li Hongge and his team at Beihang University. These chips integrate traditional binary logic with probabilistic computing through a novel Hybrid Stochastic Number (HSN) system, overcoming key limitations of current chip technologies such as high power consumption and poor compatibility with older systems. The new chips offer enhanced fault tolerance, power efficiency, and multitasking capabilities via system-on-chip (SoC) design and in-memory computing algorithms, making them suitable for applications in aviation, manufacturing, and smart control systems like touchscreens. The development leverages mature semiconductor fabrication processes, including a 110-nanometer process for initial touch and display chips and a 28 nm CMOS process for machine learning chips. The team’s innovations enable microsecond-level on-chip computing latency, balancing hardware acceleration with software flexibility. Future plans include creating specialized instructions and chip designs to further optimize hybrid probabilistic computing for complex AI tasks such as speech and image processing. Despite the promising advancements, challenges remain regarding compatibility and long-term reliability, indicating that widespread adoption and impact will require further development and validation.
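The Hybrid Stochastic Number format itself is not detailed in the article, but classical stochastic computing illustrates the underlying idea: a value in [0, 1] becomes the probability of 1s in a bitstream, a single AND gate then multiplies, and one flipped bit shifts the result by only 1/N, which is where the fault tolerance comes from. A stand-in sketch, not the Beihang design:

```python
import random

# Stochastic multiply: P(a AND b) = P(a) * P(b) for independent streams.
random.seed(1)
N = 100_000                   # stream length; accuracy scales ~ 1/sqrt(N)
a_val, b_val = 0.8, 0.5       # values to multiply

a_bits = [random.random() < a_val for _ in range(N)]
b_bits = [random.random() < b_val for _ in range(N)]
product_bits = [x & y for x, y in zip(a_bits, b_bits)]  # one AND gate per bit

estimate = sum(product_bits) / N
print(f"stochastic 0.8 * 0.5 ~ {estimate:.3f} (exact: {a_val * b_val})")
```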
Tags: AI-chip, non-binary-computing, energy-efficiency, semiconductor-technology, machine-learning-chip, in-memory-computing, smart-control-systems

Silicon-free transistors with high electron mobility built in Japan
Tags: materials, transistors, gallium-doped-indium-oxide, electron-mobility, semiconductor-technology, miniaturization, field-effect-transistor