RIEM News

Articles tagged with "AI-computing"

  • Orbiting Data Centers To Deploy Solar Power 24/7

    The article discusses the emerging concept of orbiting data centers powered by solar energy, highlighting a new collaboration between Singapore-based Orbit AI and Canadian solar firm PowerBank Corporation. Orbit AI aims to challenge established players like Elon Musk’s Starlink by developing a decentralized satellite network called “DeStarlink” and deploying AI-powered satellites such as Genesis-1. This satellite is equipped with NVIDIA AI compute cores to process infrared remote sensing data in real time, significantly reducing data retrieval times and transmission costs. The project emphasizes the advantages of space-based data centers, including limitless solar power and natural cooling, which overcome terrestrial limitations. PowerBank, a decade-old company focused on accelerating the energy transition through solar power and energy storage, has partnered with Orbit AI to develop the “Orbital Cloud” infrastructure. This system integrates satellite technology, AI computing, blockchain verification, and solar-powered data centers in low Earth orbit to provide censorship-resistant global connectivity and in-orbit compute services. PowerBank also highlights the growing market potential, projecting

    energy, solar-power, satellite-technology, AI-computing, data-centers, space-technology, renewable-energy
  • Uber robotaxi built on Lucid Gravity starts on-road tests with Nuro

    Lucid Group, Nuro, and Uber have jointly unveiled a production-intent robotaxi built on the all-electric Lucid Gravity platform, integrating Nuro’s Level 4 autonomous driving technology with Uber’s ride-hailing operations. Revealed at CES 2026, the robotaxi aims to deliver a premium passenger experience while enabling large-scale autonomous deployment. Autonomous on-road testing began in December 2025 in the San Francisco Bay Area, marking a critical step toward the planned commercial launch later in 2026. Nuro leads the testing with engineering prototypes supervised by operators to validate safety, performance, and reliability in real-world conditions. The robotaxi features a next-generation sensor suite combining high-resolution cameras, solid-state lidar, and radar, integrated into a low-profile roof-mounted halo designed to maintain the Lucid Gravity’s aesthetic. Inside, passengers can control comfort settings and view real-time visualizations of the vehicle’s perception and planned maneuvers, enhancing transparency and rider confidence. The vehicle’s autonomous

    robot, autonomous-vehicles, robotaxi, IoT-sensors, AI-computing, electric-vehicles, Level-4-autonomy
  • Qualcomm, ZF, And Mobileye Offer New ADAS Systems - CleanTechnica

    The article discusses recent advancements in Advanced Driver Assistance Systems (ADAS) from major industry players Qualcomm, ZF, and Mobileye, highlighting the growing momentum in autonomous driving technology beyond Tesla’s well-known Full Self Driving efforts. Mercedes-Benz’s latest CLA model, powered by Nvidia, exemplifies current Level 2 ADAS capabilities, using a sophisticated sensor array (10 cameras, 5 radars, 12 ultrasonic sensors) and immense computing power (508 trillion operations per second) to assist with city driving and navigation. This showcases how automakers are integrating advanced driver aids that enhance safety and convenience without full autonomy. A key development is the collaboration between Qualcomm and Tier One supplier ZF, announced in January 2025, which delivers a scalable, AI-powered ADAS platform based on Qualcomm’s Snapdragon Ride system-on-chips and ZF’s ProAI supercomputer. This turnkey solution supports automation levels up to SAE Level 3 and is designed for easy integration by automakers, reducing their R&D burden

    robot, autonomous-vehicles, ADAS, Qualcomm, ZF, AI-computing, automotive-technology
  • AMD hardware-powered humanoid robot uses body as computing system

    Italian robotics company Generative Bionics unveiled its humanoid robot concept, GENE.01, at CES 2026. Scheduled for commercial launch in late 2026, GENE.01 is designed around the principle of Physical AI, using its entire body as a computing system. The robot features a full-body tactile skin embedded with a distributed network of touch and force sensors, enabling it to sense contact, pressure, and subtle physical interactions. This tactile input is integrated into its core decision-making processes, allowing real-time responses to human touch or collisions, thereby facilitating safer and more natural human-robot interactions. Powered by AMD’s suite of CPUs, GPUs, embedded processors, and FPGA-based systems, GENE.01 processes sensory data locally near the sensors rather than relying on a centralized brain. This distributed computing approach enables split-second reactions and smoother movements, reflecting an efficiency inspired by human intelligence residing both in the brain and body. Generative Bionics emphasizes openness by leveraging AMD-supported open-source

    robotics, humanoid-robot, physical-AI, tactile-sensors, AMD-processors, industrial-automation, AI-computing
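
    The GENE.01 entry above describes a distributed, near-sensor computing approach: tactile nodes react locally and pass only compact summaries to a body-wide planner. The following Python sketch is a minimal illustration of that idea; the node names, threshold, and message format are assumptions for demonstration, not details from the article.

      # Minimal sketch (assumed details): near-sensor processing of tactile data.
      # Each node makes a fast local "reflex" decision and forwards only a summary.
      from dataclasses import dataclass
      from typing import List

      CONTACT_THRESHOLD_N = 5.0  # illustrative contact-force threshold, in newtons

      @dataclass
      class TactileNode:
          name: str
          readings: List[float]  # recent force samples from this patch of skin

          def local_reflex(self) -> bool:
              """Fast decision made next to the sensor, with no central round-trip."""
              return max(self.readings, default=0.0) > CONTACT_THRESHOLD_N

          def summary(self) -> dict:
              """Compact message sent upstream instead of the raw sample stream."""
              return {"node": self.name,
                      "peak_force": max(self.readings, default=0.0),
                      "contact": self.local_reflex()}

      def central_plan(summaries: List[dict]) -> str:
          """Slower, body-wide decision based only on per-node summaries."""
          if any(s["contact"] for s in summaries):
              return "slow down and yield to the contact"
          return "continue current motion"

      nodes = [TactileNode("left_forearm", [0.2, 0.4, 6.1]),
               TactileNode("torso", [0.1, 0.3, 0.2])]
      print(central_plan([n.summary() for n in nodes]))
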
  • China's wheeled robot dog climbs stairs at 5 feet per second in demo

    Pudu Robotics recently released a video showcasing its PUDU D5 wheeled quadruped robot climbing stairs at a speed of 1.5 meters per second (nearly 5 feet per second) in real time, without edits. The robot demonstrates a hybrid locomotion system, seamlessly switching between wheels on flat terrain and legs for stair climbing, enabling efficient navigation of mixed environments with smooth surfaces and sudden elevation changes. This hybrid approach distinguishes the D5 from other quadrupeds that rely solely on legged movement, emphasizing speed and fluidity. Unveiled in December, the PUDU D5 Series includes two configurations: a fully legged version and a wheeled variant optimized for mixed terrain. Designed for autonomous operation in complex outdoor and industrial settings, the D5 integrates powerful onboard computing using NVIDIA’s Orin platform and an RK3588 chip, supporting real-time mapping, obstacle avoidance, and path planning without constant human supervision. Its 360-degree perception system combines fisheye

    robot, quadruped-robot, autonomous-navigation, hybrid-locomotion, industrial-robotics, AI-computing, LiDAR-sensors
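
    The PUDU D5 summary above hinges on switching between wheeled and legged locomotion depending on terrain. Below is a rough Python sketch of such a mode switch; the thresholds and inputs are invented for illustration and do not come from Pudu Robotics.

      # Rough sketch (assumed thresholds): stay on wheels on smooth flat ground,
      # switch to legs when elevation changes (e.g. stairs) or roughness is high.
      from enum import Enum

      class Gait(Enum):
          WHEELED = "wheeled"
          LEGGED = "legged"

      STEP_HEIGHT_M = 0.12       # illustrative: elevation change that forces leg use
      ROUGHNESS_LIMIT = 0.5      # illustrative: 0 = glass-smooth, 1 = rubble

      def select_gait(elevation_change_m: float, surface_roughness: float) -> Gait:
          """Return the locomotion mode for the terrain directly ahead."""
          if abs(elevation_change_m) > STEP_HEIGHT_M or surface_roughness > ROUGHNESS_LIMIT:
              return Gait.LEGGED
          return Gait.WHEELED

      print(select_gait(0.00, 0.1))   # flat corridor -> Gait.WHEELED
      print(select_gait(0.17, 0.2))   # staircase     -> Gait.LEGGED
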
  • China activates 1,240-mile-wide ‘giant computer’, offers 98% efficiency of single data centre

    China has activated the world’s largest distributed AI computing pool, known as the Future Network Test Facility (FNTF), spanning approximately 1,243 miles and linking data centers across 40 cities via a high-speed optical network. This system operates with about 98% of the efficiency of a single unified data center, enabling it to handle demanding workloads such as training large AI models, telemedicine, and real-time industrial applications. The network significantly reduces AI training times—for example, cutting iteration times from over 36 seconds to about 16 seconds—thereby potentially shortening training cycles by months. The FNTF supports China’s broader strategy to build a nationwide computing-power platform, complementing initiatives like the “East Data West Computing” project and investments in emerging technologies such as photonic and quantum-enhanced chips. The facility boasts high throughput, reliability, and deterministic transmission, capable of supporting 128 heterogeneous networks and running 4,096 service trials simultaneously. While it promises revolutionary improvements in AI development

    energy, AI-computing, data-centers, distributed-computing, optical-network, industrial-internet, telemedicine
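
    The iteration-time figures quoted above (roughly 36 seconds down to about 16 seconds) imply roughly a 2.25x per-iteration speedup. The short calculation below shows how that compounds over a long training run; the iteration count is an assumption chosen only to illustrate the "shortening training cycles by months" claim.

      # Back-of-the-envelope arithmetic from the figures quoted above.
      old_iter_s, new_iter_s = 36.0, 16.0
      speedup = old_iter_s / new_iter_s                 # ~2.25x per iteration
      iterations = 1_000_000                            # assumed run length, for illustration
      saved_days = iterations * (old_iter_s - new_iter_s) / 86_400
      print(f"per-iteration speedup: {speedup:.2f}x")
      print(f"time saved over {iterations:,} iterations: ~{saved_days:.0f} days")
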
  • Data center energy demand forecasted to soar nearly 300% through 2035

    A BloombergNEF report forecasts that data center electricity demand will nearly triple by 2035, rising from 40 gigawatts today to 106 gigawatts. This surge is driven by the construction of significantly larger facilities, many located in rural areas due to urban site scarcity. Currently, only 10% of data centers consume over 50 megawatts, but future centers are expected to average over 100 megawatts, with nearly 25% exceeding 500 megawatts and some surpassing 1 gigawatt. Additionally, data center utilization rates are projected to increase from 59% to 69%, largely fueled by AI workloads, which will account for nearly 40% of total compute. The report highlights a sharp upward revision from earlier forecasts, attributed to a doubling of early-stage projects between early 2024 and 2025. Much of the new capacity is planned in states within the PJM Interconnection region—such as Virginia, Pennsylvania, Ohio, Illinois

    energy, data-centers, electricity-demand, energy-consumption, AI-computing, power-infrastructure, energy-forecast
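
    The forecast above gives capacity (40 gigawatts today, 106 gigawatts by 2035) and utilization (59% rising to 69%). The arithmetic below turns those into rough annual electricity figures; it is an illustration derived from the quoted numbers, not a figure taken from the BloombergNEF report.

      # Rough arithmetic on the forecast numbers above (illustrative only).
      cap_now_gw, cap_2035_gw = 40.0, 106.0
      util_now, util_2035 = 0.59, 0.69
      hours_per_year = 8_760

      growth = (cap_2035_gw - cap_now_gw) / cap_now_gw             # ~165% more capacity
      twh_now = cap_now_gw * util_now * hours_per_year / 1_000     # ~207 TWh per year
      twh_2035 = cap_2035_gw * util_2035 * hours_per_year / 1_000  # ~641 TWh per year
      print(f"capacity growth: {growth:.0%}")
      print(f"implied demand: {twh_now:.0f} TWh/yr today -> {twh_2035:.0f} TWh/yr in 2035")
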
  • If the US Has to Build Data Centers, Here’s Where They Should Go

    A recent analysis examining the environmental footprint of AI-related data centers in the US reveals that the current favored locations for these facilities may not be the most sustainable choices. With tech giants like Meta and OpenAI committing hundreds of billions to trillions of dollars toward US data center infrastructure, the study highlights the urgent need to consider environmental impacts—particularly carbon emissions and water usage—when deciding where to build. The research, led by Cornell professor Fengqi You, emphasizes that data centers’ environmental costs vary significantly depending on their location, due to differences in energy grid cleanliness and water availability for cooling. The analysis identifies states such as Texas, Montana, Nebraska, and South Dakota as optimal for future AI data center installations because they balance access to cleaner energy and sufficient water resources. In contrast, traditional hubs like Virginia and California, while popular due to proximity to tech hubs and fiber connectivity, face challenges: Virginia’s heavy data center energy demand could hinder its clean energy goals, and California’s chronic water scarcity poses risks for

    energy, data-centers, AI-computing, environmental-impact, renewable-energy, water-usage, carbon-footprint
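
    The study above weighs two location-dependent factors: how clean the local grid is and how much water is available for cooling. The sketch below scores hypothetical sites with a simple weighted blend of those two factors; the weights, site names, and numbers are all invented for illustration and are not taken from the Cornell analysis.

      # Hypothetical siting-score sketch: lower is better. All values are made up.
      def siting_score(carbon_kg_per_kwh: float, water_stress: float,
                       w_carbon: float = 0.6, w_water: float = 0.4) -> float:
          """Weighted blend of grid carbon intensity and cooling-water stress."""
          return w_carbon * carbon_kg_per_kwh + w_water * water_stress

      candidates = {                         # (kg CO2 per kWh, water-stress index 0-1)
          "clean_grid_but_dry":  (0.20, 0.80),
          "wet_but_dirty_grid":  (0.55, 0.15),
          "balanced_site":       (0.30, 0.30),
      }
      for name in sorted(candidates, key=lambda s: siting_score(*candidates[s])):
          print(name, round(siting_score(*candidates[name]), 3))
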
  • China’s compact AI server claims 90% lower power consumption

    China’s Guangdong Institute of Intelligent Science and Technology (GDIIST) has unveiled BIE-1, a compact AI supercomputer roughly the size of a mini refrigerator that reduces power consumption by 90% compared to traditional supercomputers. Developed in collaboration with Zhuhai Hengqin Neogenint Technology and Suiren Medical Technology, BIE-1 integrates 1,152 CPU cores, 4.8 terabytes of DDR5 memory, and 204 terabytes of storage. It employs brain-inspired neural networks and AI algorithms to deliver advanced computational capabilities, including high-speed training and inference of multiple data types such as text, images, and speech. The device operates quietly and maintains CPU temperatures below 70°C, while running efficiently on a standard household power socket. The BIE-1’s design addresses the challenges of traditional supercomputers, which require large physical spaces and consume massive amounts of energy for both computing and cooling. Its portability and low power usage make it suitable for deployment in

    energy, AI-computing, supercomputer, low-power-consumption, sustainable-technology, Guangdong-Institute-of-Intelligent-Science-and-Technology, compact-server
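
    One way to sanity-check the "standard household power socket" claim above is to look at what such a socket can supply. The figures below assume a common 220 V, 10 A household circuit, which is an assumption rather than a detail given in the article.

      # Quick check of the claim that BIE-1 runs from an ordinary wall socket.
      volts, amps = 220, 10                     # assumed: a common household circuit rating
      socket_ceiling_kw = volts * amps / 1_000  # ~2.2 kW deliverable from one socket
      print(f"a single socket tops out near {socket_ceiling_kw:.1f} kW, "
            "so the whole machine must fit in a low single-digit-kW power budget")
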
  • DGX Spark: NVIDIA unveils its smallest AI computer at $3,999

    NVIDIA has launched the DGX Spark, touted as the world’s smallest AI supercomputer, priced at $3,999. This compact 2.6-pound device integrates the new GB10 Grace Blackwell Superchip, which combines a 20-core Arm-based Grace CPU with a Blackwell GPU featuring CUDA cores equivalent to the RTX 5070 graphics card. Optimized for desktop AI development, the DGX Spark delivers up to 1,000 trillion operations per second using fifth-generation Tensor Cores and FP4 support, supported by NVLink-C2C interconnect technology for high-bandwidth CPU-GPU communication. It comes equipped with 128GB of shared LPDDR5x memory, 4TB NVMe storage, and connectivity options including USB-C, Wi-Fi 7, and HDMI, running on NVIDIA’s Ubuntu-based DGX OS preloaded with AI tools. Designed for developers, researchers, and students, the DGX Spark enables local fine-tuning and deployment of large AI

    robot, AI-computing, NVIDIA-DGX-Spark, AI-development, robotics-simulation, AI-hardware, edge-AI-computing
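
    A useful way to read the DGX Spark's 128GB of unified memory is in terms of how large a model fits locally when weights are stored at FP4 (4 bits, i.e. 0.5 bytes per parameter). The sizing arithmetic below is illustrative and ignores activation, KV-cache, and framework overheads.

      # Illustrative sizing arithmetic for FP4 weights in 128 GB of unified memory.
      UNIFIED_MEMORY_GB = 128
      BYTES_PER_PARAM_FP4 = 0.5         # 4-bit weights

      def weight_footprint_gb(params_billion: float) -> float:
          return params_billion * 1e9 * BYTES_PER_PARAM_FP4 / 1e9

      for params in (8, 70, 200):       # example model sizes, in billions of parameters
          gb = weight_footprint_gb(params)
          verdict = "fits" if gb < UNIFIED_MEMORY_GB else "does not fit"
          print(f"{params}B params at FP4 -> ~{gb:.0f} GB of weights ({verdict})")
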
  • Stable earnings emerge as fresh opportunities for BTC owners

    The article highlights the launch of PlanMining’s innovative cloud mining application, which offers Bitcoin holders worldwide a new, accessible way to mine Bitcoin without the need for expensive hardware or technical expertise. By simply using a smartphone and internet connection, users can participate in Bitcoin mining and earn real-time returns. The platform leverages AI-powered intelligent computing power scheduling and operates on 100% green energy data centers, ensuring an efficient, secure, and environmentally friendly mining experience. PlanMining’s app features a user-friendly interface, flexible contract options, and automatic daily settlement and withdrawal, promoting ease of use and continuous Bitcoin appreciation. To provide stable and reliable income, PlanMining denominates all mining contracts in USD, reducing exposure to cryptocurrency price volatility. The AI-driven system dynamically adjusts computing power allocation to maintain mining efficiency and mitigate revenue fluctuations. Users can select contracts based on their risk preferences and monitor returns in real time, with the option to withdraw or reinvest earnings flexibly. The platform emphasizes security through bank-level encryption,

    IoT, energy, AI-computing, cloud-mining, green-energy, digital-assets, cryptocurrency-mining
  • How workers escape paycheck-to-paycheck with cloud mining

    The article discusses how cloud mining platforms, specifically Ripplecoin Mining, provide an accessible way for ordinary workers to generate stable supplementary income without the need for technical expertise or hardware investment. Cloud mining allows users to purchase contracts that leverage AI-powered computing resources in green energy data centers to mine various cryptocurrencies like USDT, XRP, Bitcoin, and Ethereum. This hands-off approach eliminates the need to monitor volatile crypto markets constantly, offering daily profit settlements that can be withdrawn anytime. Ripplecoin Mining, founded in 2017 and based in London, emphasizes ease of use, transparency, compliance with regulations, and environmental sustainability. Users simply register, select a contract based on their budget, and start earning daily returns automatically. Contract options range from small short-term trials to high-yield long-term plans, catering to both beginners and experienced investors. The platform’s security measures and renewable energy use further enhance its appeal. The article highlights a case study of a mid-level office worker who achieved a stable daily profit through Ripplecoin Mining

    energy, cloud-mining, cryptocurrency, AI-computing, green-energy, data-centers, blockchain
  • NVIDIA Jetson Thor computer gives humanoid robots 7.5x power boost

    NVIDIA has launched the Jetson AGX Thor developer kit and production modules, delivering a significant leap in AI computing power for robotics applications. The Jetson Thor offers up to 2,070 FP4 teraflops of AI compute and 128 GB of memory within a 130-watt power envelope, providing 7.5 times more AI performance and 3.5 times greater energy efficiency than its predecessor, Jetson Orin. Powered by NVIDIA’s Blackwell GPU, the system can run multiple AI models simultaneously, including vision-language-action models and large language models, enabling robots to perceive, reason, and act in real time without relying on cloud servers. This makes it suitable for a wide range of applications, from humanoid robots and industrial machines to surgical assistants and precision farming. The Jetson Thor platform is supported by NVIDIA’s comprehensive software stack, including Isaac for robotics simulation, Metropolis for vision AI, and Holoscan for sensor processing. Early adopters such as Amazon

    robot, AI-computing, humanoid-robots, NVIDIA-Jetson-Thor, industrial-robots, edge-AI, robotics-development
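
    The Jetson Thor numbers above (2,070 FP4 teraflops in a 130-watt envelope, 7.5 times the performance and 3.5 times the efficiency of Jetson Orin) can be turned into a few implied figures with simple division. The calculation below is derived only from those quoted ratios.

      # Arithmetic implied by the ratios quoted above (illustrative).
      thor_tflops_fp4 = 2_070.0
      thor_power_w = 130.0
      perf_ratio, eff_ratio = 7.5, 3.5

      thor_eff = thor_tflops_fp4 / thor_power_w            # ~15.9 TFLOPS per watt
      implied_orin_tflops = thor_tflops_fp4 / perf_ratio   # ~276 TFLOPS-equivalent
      implied_orin_eff = thor_eff / eff_ratio              # ~4.5 TFLOPS per watt
      print(f"Thor: ~{thor_eff:.1f} TFLOPS/W")
      print(f"implied predecessor: ~{implied_orin_tflops:.0f} TFLOPS at ~{implied_orin_eff:.1f} TFLOPS/W")
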
  • China's data centers are pushing cooling to the limit

    China’s rapid expansion in AI computing power has led to a significant increase in data center energy consumption and heat generation, pushing traditional air cooling methods to their limits. High-power AI chips, such as Huawei’s Ascend 910B and 910C, consume substantial energy, resulting in power densities per rack exceeding 15 kW and sometimes approaching 30 kW. This intense heat output has made air cooling inefficient due to increased noise, energy use, and maintenance challenges. Consequently, China is increasingly adopting liquid cooling technologies, especially cold plate liquid cooling, which offers efficient heat dissipation and easier retrofitting compared to immersion cooling. The liquid-cooled server market in China reached $2.37 billion in 2024, growing 67% year-over-year, with projections to hit $16.2 billion by 2029. This growth is driven by national strategies like “East Data West Computing” and policies promoting green data centers with power usage effectiveness (PUE) targets below 1

    energy, data-centers, cooling-technology, liquid-cooling, AI-computing, power-usage-effectiveness, China-technology
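
    Power usage effectiveness (PUE), mentioned above, is the ratio of total facility power to the power drawn by the IT equipment itself; everything above 1.0 is overhead for cooling and power conversion. The example below uses an illustrative 30 kW rack, in line with the densities cited in the article, to show how much overhead different PUE targets imply.

      # Worked example: PUE = total facility power / IT power.
      def facility_power_kw(it_load_kw: float, pue: float) -> float:
          return it_load_kw * pue

      rack_it_kw = 30.0                     # illustrative high-density AI rack
      for pue in (1.5, 1.25, 1.1):
          total = facility_power_kw(rack_it_kw, pue)
          overhead = total - rack_it_kw     # cooling, UPS losses, distribution, etc.
          print(f"PUE {pue}: {total:.1f} kW total per rack, {overhead:.1f} kW of overhead")
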
  • Instead of selling to Meta, AI chip startup FuriosaAI signed a huge customer

    South Korean AI chip startup FuriosaAI recently announced a partnership to supply its AI chip, RNGD, to enterprises using LG AI Research’s EXAONE platform, a next-generation hybrid AI model optimized for large language models (LLMs). This collaboration targets multiple sectors including electronics, finance, telecommunications, and biotechnology. The deal follows FuriosaAI’s decision to reject Meta’s $800 million acquisition offer three months prior, citing disagreements over post-acquisition strategy and organizational structure rather than price. FuriosaAI’s CEO June Paik emphasized the company’s commitment to remaining independent and advancing sustainable AI computing. The partnership with LG AI Research is significant as it represents a rare endorsement of a competitor to Nvidia by a major enterprise. FuriosaAI’s RNGD chip demonstrated 2.25 times better inference performance and greater energy efficiency compared to competitive GPUs when running LG’s EXAONE models. Unlike general-purpose GPUs, FuriosaAI’s hardware is specifically designed for AI computing, lowering total cost of ownership while

    AI-chips, FuriosaAI, LG-AI-Research, energy-efficiency, AI-computing, semiconductor-materials, AI-hardware
  • CoreWeave acquires data center provider Core Scientific in $9B stock deal

    CoreWeave has agreed to acquire Core Scientific, a data center infrastructure provider, in a $9 billion all-stock transaction. This acquisition will significantly expand CoreWeave’s data center capacity by more than one gigawatt, enabling the company to offer substantial resources for AI training and inference workloads. Both companies have histories in Bitcoin mining, but the focus is now shifting toward utilizing GPUs for running and training generative AI models. The deal highlights the ongoing race among cloud infrastructure providers to scale their data center capabilities to meet the growing computational demands of AI companies. This move follows other large-scale expansions in the industry, such as Oracle’s recent agreement to provide an additional 4.5 gigawatts of data center capacity, further emphasizing the critical importance of infrastructure growth in supporting AI development.

    energy, data-centers, AI-computing, GPU, cloud-infrastructure, CoreWeave, Core-Scientific