RIEM News

Articles tagged with "cloud-computing"

  • Andy Jassy says Amazon’s Nvidia competitor chip is already a multi-billion-dollar business

    Amazon CEO Andy Jassy announced at the AWS re:Invent conference that the company’s AI chip business, centered on its Nvidia competitor Trainium, is already running at a multi-billion-dollar revenue run rate. The current generation, Trainium2, boasts over one million chips in production and is used by more than 100,000 companies, powering the majority of usage on Amazon’s AI app development platform, Bedrock. Jassy emphasized that Trainium2 offers compelling price-performance advantages over other GPUs, making it a popular choice among AWS’s extensive cloud customer base. A significant portion of Trainium2’s revenue comes from Anthropic, a key AWS partner using over 500,000 Trainium2 chips in Project Rainier, Amazon’s large-scale AI server cluster designed to support Anthropic’s advanced model training needs. While other major AI players like OpenAI also use AWS, they primarily rely on Nvidia chips, underscoring the challenge of competing with Nvidia’s entrenched GPU technology and proprietary CUDA software

    energy, AI-chips, cloud-computing, semiconductor-technology, Amazon-Trainium, Nvidia-competitor, data-centers
  • All the biggest news from AWS’ big tech show re:Invent 2025

    At AWS re:Invent 2025, Amazon Web Services emphasized AI advancements focused on enterprise customization and autonomous AI agents. CEO Matt Garman highlighted a shift from AI assistants to AI agents capable of independently performing tasks and automating workflows, unlocking significant business value. Key announcements included expanded capabilities for AWS’s AgentCore platform, such as policy-setting features to control AI agent behavior, enhanced memory and logging functions, and 13 pre-built evaluation systems to help customers assess agent performance. AWS also introduced three new “Frontier agents” designed for coding, security reviews, and DevOps tasks, with preview versions already available. AWS unveiled its new AI training chip, Trainium3, promising up to 4x performance improvements and 40% lower energy use for AI training and inference. The company teased Trainium4, which will be compatible with Nvidia chips, signaling deeper integration with Nvidia technology. Additionally, AWS expanded its Nova AI model family with new text and multimodal models, alongside Nova Forge, a

    energy, AI-chips, cloud-computing, AI-agents, Nvidia-compatibility, AI-training, AWS-re:Invent
  • Amazon releases an impressive new AI chip and teases an Nvidia-friendly roadmap

    Amazon Web Services (AWS) has unveiled its latest AI training chip, Trainium3, along with the Trainium3 UltraServer system at its AWS re:Invent 2025 conference. Built on a 3-nanometer process, Trainium3 delivers significant improvements over its predecessor, offering more than four times the speed and memory capacity for AI training and inference. Each UltraServer can host 144 chips, and thousands of these servers can be linked to scale up to one million Trainium3 chips, representing a tenfold increase from the previous generation. Additionally, the new chips are 40% more energy efficient, aligning with AWS’s goal to reduce operational costs and energy consumption while providing cost savings to AI cloud customers. Early adopters such as Anthropic, Karakuri, Splashmusic, and Decart have already reported substantial reductions in inference costs using Trainium3. Looking ahead, AWS teased the development of Trainium4, which promises another major performance boost and will support Nvidia’s

    energy, AI-chip, cloud-computing, data-center, energy-efficiency, Nvidia, AWS
  • AWS is spending $50B to build AI infrastructure for the US government

    Amazon Web Services (AWS) has announced a $50 billion investment to build specialized AI high-performance computing infrastructure tailored for U.S. government agencies. This initiative aims to significantly enhance federal access to AWS AI services, including Amazon SageMaker for model customization, Amazon Bedrock for model deployment, and Anthropic’s Claude chatbot. The project will add 1.3 gigawatts of computing power, with construction of new data centers expected to begin in 2026. AWS CEO Matt Garman emphasized that this investment will transform how federal agencies utilize supercomputing, accelerating critical missions such as cybersecurity and drug discovery, while removing technological barriers that have previously limited government AI adoption. AWS has a long history of working with the U.S. government, having started building cloud infrastructure for federal use in 2011. It launched the first air-gapped commercial cloud for classified workloads in 2014 and introduced the AWS Secret Region in 2017, which supports all security classification levels. This new AI infrastructure

    energy, AI-infrastructure, cloud-computing, high-performance-computing, government-technology, data-centers, supercomputing
  • India’s TCS gets TPG to fund half of $2B AI data center project

    Tata Consultancy Services (TCS) has partnered with private equity firm TPG to secure $1 billion in funding for the first half of a $2 billion multi-year project called “HyperVault,” aimed at building a network of gigawatt-scale, liquid-cooled, high-density AI data centers across India. This initiative addresses the country’s significant gap between its large data generation—nearly 20% of global data—and its limited data center capacity, which currently accounts for only about 3% of the global total. The new data centers will support advanced AI workloads and are designed to meet the growing demand for AI compute power amid rapid adoption of AI technologies in India. However, the project faces challenges related to resource constraints, including water scarcity, power supply, and land availability, especially in urban hubs like Mumbai, Bengaluru, and Chennai where data center concentration is high. Liquid cooling, while necessary for managing the heat from power-intensive AI GPUs, raises concerns about water usage, with estimates suggesting a

    energy, data-centers, AI-infrastructure, liquid-cooling, power-consumption, water-scarcity, cloud-computing
  • Anthropic announces $50 billion data center plan

    Anthropic announced a significant $50 billion partnership with U.K.-based neocloud provider Fluidstack to build new data centers across Texas and New York, scheduled to come online throughout 2026. This investment aims to support the intense compute demands of Anthropic’s Claude AI models and advance AI capabilities that can accelerate scientific discovery and solve complex problems. CEO Dario Amodei emphasized the need for robust infrastructure to sustain frontier AI development. While Anthropic’s $50 billion commitment is substantial, it is smaller compared to competitors’ infrastructure investments, such as Meta’s $600 billion data center plan over three years and the $500 billion Stargate partnership involving SoftBank, OpenAI, and Oracle. The surge in AI infrastructure spending has raised concerns about a potential AI bubble. The deal also marks a major milestone for Fluidstack, a relatively young neocloud company founded in 2017, which has quickly become a preferred vendor in the AI sector with partnerships including Meta, Black Forest Labs, and

    energy, data-centers, cloud-computing, AI-infrastructure, compute-power, neocloud, technology-investment
  • Is physical world AI the future of autonomous machines? - The Robot Report

    The article discusses the emerging role of physical world AI—cloud-based systems integrated with AI models that create ultra high-precision, spatially aware representations of the physical environment—in advancing autonomous machines such as cars, drones, and tractors. While companies like Waymo have developed sophisticated onboard AI and navigation hardware, the article argues that relying solely on onboard compute is insufficient for widespread autonomous machine deployment. Instead, leveraging cloud-based spatial intelligence can enhance route optimization and hazard detection by providing machines with detailed, real-time environmental context beyond their immediate sensor inputs. Currently, most AI in autonomous machines operates locally on the edge, lacking awareness of the broader physical landscape. However, abundant data from satellites, drones, and other sources can feed cloud systems that process complex spatial information—such as vectors representing terrain and obstacles—making AI models more capable of understanding and navigating the environment. This spatial intelligence cloud approach, pursued by companies like Wherobots, can improve autonomous vehicle performance in challenging scenarios like rural deliveries or complex urban settings

    robot, autonomous-machines, AI, cloud-computing, navigation-technology, drones, self-driving-cars
  • Sam Altman says OpenAI has $20B ARR and about $1.4 trillion in data center commitments

    In a recent statement, OpenAI CEO Sam Altman revealed that the company expects to surpass $20 billion in annualized revenue run rate by the end of 2025, with ambitions to grow to hundreds of billions by 2030. Altman also disclosed that OpenAI has approximately $1.4 trillion in data center commitments planned over the next eight years, reflecting the company’s aggressive expansion in infrastructure to support its AI operations. These figures were shared partly to clarify and respond to earlier comments made by OpenAI’s CFO that had caused some confusion. Altman outlined several future business initiatives poised to drive significant revenue growth. These include an upcoming enterprise offering, consumer devices, robotics, and ventures into scientific discovery through OpenAI for Science, a recently launched initiative. Additionally, OpenAI is exploring becoming a cloud computing provider by selling compute capacity directly to other companies and individuals, anticipating a growing demand for “AI cloud” services. Despite not owning its own data center network yet, OpenAI is positioning

    robot, data-centers, cloud-computing, AI-devices, enterprise-AI, scientific-discovery, AI-cloud
  • Microsoft inks $9.7B deal with Australia’s IREN for AI cloud capacity

    Microsoft has secured a significant $9.7 billion, five-year contract with Australia-based IREN to expand its AI cloud computing capacity. This deal grants Microsoft access to advanced compute infrastructure equipped with Nvidia GB300 GPUs, which will be deployed in phases through 2026 at IREN’s facility in Childress, Texas, designed to support up to 750 megawatts of capacity. Separately, IREN is investing about $5.8 billion in GPUs and equipment from Dell to support this infrastructure expansion. The agreement follows Microsoft’s recent launch of AI models optimized for reasoning, agentic AI systems, and multi-modal generative AI, reflecting the company's efforts to meet growing demand for AI services. Microsoft has also previously acquired approximately 200,000 Nvidia GB300 GPUs for data centers in Europe and the U.S. IREN, originally a bitcoin-mining firm, has pivoted successfully to AI workloads, leveraging its extensive GPU resources. CEO Daniel Roberts anticipates that the Microsoft contract will utilize only

    energy, cloud-computing, AI-infrastructure, GPUs, data-centers, Microsoft, Nvidia
  • Google partners with Ambani’s Reliance to offer free AI Pro access to millions of Jio users in India

    Google has partnered with Mukesh Ambani-led Reliance Industries to offer its AI Pro subscription free for 18 months to eligible Jio 5G users in India, initially targeting those aged 18 to 25 before expanding nationwide. This collaboration provides access to Google’s Gemini 2.5 Pro AI model, enhanced AI image and video generation tools, expanded study and research capabilities via NotebookLM, and 2 TB of cloud storage across Google services. Valued at approximately $396, the offer aims to accelerate AI adoption among India’s vast internet user base and reflects Google’s strategy to deepen its AI presence in emerging markets. Beyond consumer access, Reliance and Google Cloud are collaborating to expand AI infrastructure in India, with Reliance Intelligence becoming a strategic partner to promote Gemini Enterprise among Indian organizations and develop AI agents on the platform. This partnership complements Reliance’s broader AI initiatives, including a joint venture with Meta to strengthen AI infrastructure through a ₹8.55 billion ($100 million) investment.

    IoT, AI, 5G, cloud-computing, telecommunications, artificial-intelligence, tech-partnerships
  • Tata Motors confirms it fixed security flaws, which exposed company and customer data

    Indian automotive giant Tata Motors addressed multiple critical security vulnerabilities that exposed sensitive internal data, including personal customer information, company reports, and dealer data. Security researcher Eaton Zveare discovered these flaws in Tata Motors’ E-Dukaan e-commerce portal for spare parts, where the web source code contained private Amazon Web Services (AWS) keys. These keys granted access to hundreds of thousands of invoices with customer details such as names, addresses, and PAN numbers, as well as MySQL backups, Apache Parquet files, and over 70 terabytes of data related to Tata Motors’ FleetEdge tracking software. Additionally, Zveare found backdoor admin access to a Tableau account with data on over 8,000 users and API access to the company's fleet management platform, Azuga. After reporting the issues to Tata Motors via India’s CERT-In in August 2023, the company confirmed to TechCrunch that all vulnerabilities were thoroughly reviewed and fully remediated within the same year. Tata Motors emphasized its commitment

    IoT, cybersecurity, fleet-management, data-security, automotive-technology, cloud-computing, AWS
  • Mbodi will show how it can train a robot using AI agents at TechCrunch Disrupt 2025

    Mbodi, a New York-based startup founded by former Google engineers Xavier Chi and Sebastian Peralta, has developed a cloud-to-edge hybrid computing system designed to accelerate robot training using multiple AI agents. Their software integrates with existing robotic technology stacks and allows users to train robots via natural language prompts. The system breaks down complex tasks into smaller subtasks, enabling AI agents to collaborate and gather the necessary information to teach robots new skills more efficiently. Mbodi’s approach addresses the challenge of adapting robots to the infinite variability of real-world physical environments, where traditional robot programming is often too rigid and time-consuming. Since launching in 2024 with a focus on picking and packaging tasks, Mbodi has gained recognition by winning an ABB Robotics AI startup competition and securing a partnership with a Swiss robotics organization valued at $5.4 billion. The company is currently working on a proof of concept with a Fortune 100 consumer packaged goods (CPG) company, aiming to automate packing tasks that frequently change and are difficult to

    robotics, artificial-intelligence, AI-training, cloud-computing, edge-computing, automation, robotic-software
  • Amazon identifies the issue that broke much of the internet, but is still working to restore services

    Amazon Web Services (AWS) experienced a significant outage that disrupted large portions of the internet, affecting websites, banks, government services, and major apps such as Coinbase, Fortnite, Signal, and Zoom. The root cause was identified as a DNS resolution problem related to the DynamoDB API endpoints in the N. Virginia (us-east-1) region. DNS (Domain Name System) is crucial for translating web addresses into IP addresses, enabling websites and apps to load properly. Although AWS reported that the underlying DNS issue was fully mitigated by early Tuesday morning (2:24 AM PDT), the company was still working to fully restore all services. The outage, which began around 3 a.m. on the U.S. East Coast, also impacted Amazon’s own platforms, including Amazon.com, its subsidiaries, AWS customer support, and Ring video surveillance products. This incident highlights the critical role AWS plays in hosting websites, apps, and online systems for millions of companies worldwide, given its substantial share of the

    IoT, cloud-computing, AWS-outage, DNS-resolution, internet-infrastructure, cybersecurity, smart-home-devices
  • Amazon, CMU partner on new AI Innovation Hub

    Amazon and Carnegie Mellon University (CMU) have launched the CMU-Amazon AI Innovation Hub to advance collaborative research in artificial intelligence, robotics, and cloud computing. The hub aims to support joint research projects, Ph.D. fellowships, and workshops that leverage both institutions’ expertise, with a focus on responsible AI development, advanced robotics, and next-generation cloud infrastructure. Amazon will provide significant funding and resources, while CMU will apply its interdisciplinary approach to accelerate innovation. The first research symposium is scheduled for October 28 at CMU’s Pittsburgh campus to foster collaboration and set research agendas. This partnership builds on existing collaborations between Amazon and CMU, emphasizing generative AI, natural-language processing, and robotics. Theresa Mayer, CMU’s vice president for research, highlighted the importance of combining academic discovery with practical application to drive societal benefits and expand knowledge frontiers. In parallel, Amazon is also investing in AI through industry partnerships, exemplified by the newly launched Physical AI Fellowship with MassRobotics and

    robot, artificial-intelligence, robotics-research, AI-innovation, cloud-computing, academic-partnership, AI-development
  • Nscale inks massive AI infrastructure deal with Microsoft

    Nscale, an AI cloud provider founded in 2024, has secured a major deal to deploy approximately 200,000 Nvidia GB300 GPUs across data centers in Europe and the U.S. This deployment will occur through Nscale’s own operations and a joint venture with investor Aker. Key locations include a Texas data center leased by Ionic Digital, which will receive 104,000 GPUs over 12 to 18 months, with plans to expand capacity to 1.2 gigawatts. Additional deployments include 12,600 GPUs at the Start Campus in Sines, Portugal (starting Q1 2026), 23,000 GPUs at Nscale’s Loughton, England campus (starting 2027), and 52,000 GPUs at Microsoft’s AI campus in Narvik, Norway. This deal builds on prior collaborations with Microsoft and Aker involving data centers in Norway and the UK. Josh Payne, Nscale’s founder and CEO, emphasized that this agreement positions Nscale as

    energy, AI-infrastructure, data-centers, GPUs, sustainability, cloud-computing, technology-investment
  • Meta partners up with Arm to scale AI efforts

    Meta has partnered with semiconductor design company Arm to enhance its AI systems amid a significant infrastructure expansion. The collaboration will see Meta’s ranking and recommendation systems transition to Arm’s technology, leveraging Arm’s strengths in low-power, efficient AI deployments. Meta’s head of infrastructure, Santosh Janardhan, emphasized that this partnership aims to scale AI innovation to over 3 billion users. Arm CEO Rene Haas highlighted the focus on performance-per-watt efficiency as critical for the next era of AI. This multi-year partnership coincides with Meta’s massive investments in AI infrastructure, including projects like “Prometheus,” a data center expected to deliver multiple gigawatts of power by 2027 in Ohio, and “Hyperion,” a 2,250-acre data center campus in Louisiana projected to provide 5 gigawatts of computational power by 2030. Unlike other recent AI infrastructure deals, Meta and Arm are not exchanging ownership stakes or physical infrastructure. This contrasts with Nvidia’s extensive investments in AI firms such

    energy, AI-infrastructure, data-centers, semiconductor, power-consumption, cloud-computing, Meta
  • Google to invest $15B in Indian AI infrastructure hub

    Google announced a $15 billion investment to establish a 1-gigawatt data center and AI hub in Visakhapatnam, Andhra Pradesh, India, over the next five years through 2030. This marks Google's largest investment in India and its biggest outside the U.S. The AI hub will be part of a global network spanning 12 countries and will offer a full suite of AI solutions, including custom Tensor Processing Units (TPUs), access to AI models like Gemini, and support for consumer services such as Google Search, YouTube, Gmail, and Google Ads. Google is partnering with Indian telecom Bharti Airtel and AdaniConneX to build the data center and subsea cable infrastructure, positioning Visakhapatnam as a global connectivity hub and digital backbone for India. The investment comes amid growing Indian government efforts to promote local alternatives to U.S. tech giants like Google, with initiatives encouraging “swadeshi” or “made in India” products and services. Despite these

    energy, data-center, AI-infrastructure, cloud-computing, subsea-cable, connectivity-hub, India-investment
  • Where AI meets the windshield: smarter safety with VUEROID

    The article highlights how VUEROID is transforming traditional dash cams from passive recording devices into intelligent, AI-enhanced safety tools. Jessie Lee, a product planner at VUEROID, emphasizes the importance of reliable, high-quality video recording as the foundation of effective dash cams, rather than chasing flashy features like LTE connectivity or advanced driver-assistance systems (ADAS). VUEROID’s flagship model, the S1 4K Infinite, reflects this philosophy by prioritizing image quality, system reliability, and usability after incidents occur. VUEROID’s approach to AI is practical and focused on post-incident benefits, such as their AI-powered license plate restoration feature that enhances unclear footage to help identify vehicles involved in collisions. Additionally, their cloud-based AI supports privacy features like facial and license plate masking to protect sensitive data before sharing footage with insurers or on social media. A key technical strength lies in VUEROID’s expertise in Image Signal Processing (ISP) tuning, which optimizes image clarity

    IoT, AI, dash-cams, automotive-technology, cloud-computing, image-processing, vehicle-safety
  • The billion-dollar infrastructure deals powering the AI boom

    The article highlights the massive investment and infrastructure buildup fueling the current AI boom, emphasizing the enormous computing power required to train and run AI models. Nvidia CEO Jensen Huang estimates that $3 to $4 trillion will be spent on AI infrastructure by 2030, with major tech companies like Microsoft, Meta, Oracle, Google, and OpenAI leading the charge. The piece details key deals, starting with Microsoft’s landmark $1 billion investment in OpenAI in 2019, which established Microsoft as OpenAI’s exclusive cloud provider and laid the groundwork for a partnership now valued at nearly $14 billion. Although OpenAI has recently diversified its cloud partnerships, this model of close collaboration between AI firms and cloud providers has become standard, with companies like Anthropic partnering with Amazon and Google Cloud acting as primary computing partners for other AI ventures. Oracle’s emergence as a major AI infrastructure player is underscored by its unprecedented deals with OpenAI, including a $30 billion cloud services contract revealed in mid-2025

    energy, AI-infrastructure, cloud-computing, data-centers, Nvidia, Microsoft-Azure, OpenAI
  • While OpenAI races to build AI data centers, Nadella reminds us that Microsoft already has them

    Microsoft CEO Satya Nadella announced the deployment of the company’s first massive AI system—referred to as an AI “factory” by Nvidia—at Microsoft Azure’s global data centers. These systems consist of clusters with over 4,600 Nvidia GB300 rack computers equipped with the new Blackwell Ultra GPU chips, connected via Nvidia’s high-speed InfiniBand networking technology. Microsoft plans to deploy hundreds of thousands of these Blackwell Ultra GPUs worldwide, enabling the company to run advanced AI workloads, including those from its partner OpenAI. This announcement comes shortly after OpenAI secured significant data center deals and committed approximately $1 trillion in 2025 to build its own infrastructure. Microsoft emphasized that, unlike OpenAI’s ongoing build-out, it already operates extensive data centers in 34 countries, positioning itself as uniquely capable of supporting frontier AI demands today. The new AI systems are designed to handle next-generation AI models with hundreds of trillions of parameters. Further details on Microsoft’s AI infrastructure expansion are

    energy, data-centers, AI-hardware, GPUs, cloud-computing, Nvidia, Microsoft-Azure
  • Even after Stargate, Oracle, Nvidia and AMD, OpenAI has more big deals coming soon, Sam Altman says

    OpenAI has been actively securing large-scale infrastructure deals to support its rapidly growing AI model development, with major partnerships involving Nvidia, AMD, Oracle, and others. Nvidia has invested in OpenAI, becoming a shareholder, while AMD has granted OpenAI warrants for up to 10% of its stock in exchange for collaboration on next-generation AI GPUs. These deals include commitments for tens of gigawatts of AI data center capacity, such as OpenAI’s $500 billion Stargate deal with Oracle and SoftBank for U.S. facilities, and additional expansions in the UK and Europe. Nvidia is also preparing OpenAI for a future where it operates its own data centers, although the cost of such infrastructure—estimated at $50 to $60 billion per gigawatt—is currently beyond OpenAI’s direct financial capacity. OpenAI CEO Sam Altman emphasized that these partnerships are part of an aggressive infrastructure investment strategy to support more capable future AI models and products. Despite OpenAI’s revenue not yet approaching the scale of its

    energy, AI-data-centers, Nvidia, AMD, OpenAI, cloud-computing, semiconductor-chips
  • A year after filing to IPO, still-private Cerebras Systems raises $1.1B

    Cerebras Systems, a Silicon Valley-based AI hardware company and competitor to Nvidia, raised $1.1 billion in a Series G funding round that values the company at $8.1 billion. This latest round, co-led by Fidelity and Atreides Management with participation from Tiger Global and others, brings Cerebras’ total funding to nearly $2 billion since its 2015 founding. The company specializes in AI chips, hardware systems, and cloud services, and has experienced rapid growth driven by its AI inference services launched in August 2024, which run trained AI models to generate outputs for customers. To support this growth, Cerebras opened five new data centers in 2025 across the U.S., with plans for further expansion in Montreal and Europe. Originally, Cerebras had filed for an IPO in September 2024 but faced regulatory delays due to a $335 million investment from Abu Dhabi-based G42, triggering a review by the Committee on Foreign Investment in the United States (CFIUS).

    AI-hardware, semiconductor, data-centers, cloud-computing, AI-inference, technology-funding, Silicon-Valley-startups
  • The billion-dollar infrastructure deals powering the AI boom

    The article highlights the massive investments and infrastructure developments fueling the current AI boom, emphasizing the enormous computing power required to run advanced AI models. Nvidia CEO Jensen Huang estimates that $3 to $4 trillion will be spent on AI infrastructure by 2030, with major tech companies like Microsoft, Meta, Oracle, Google, and OpenAI leading the charge. Central to this surge was Microsoft’s initial $1 billion investment in OpenAI in 2019, which positioned Microsoft as OpenAI’s exclusive cloud provider and laid the groundwork for a partnership that has grown to nearly $14 billion. Although OpenAI has recently diversified its cloud partnerships, this model of exclusive or primary cloud provider relationships has become common, with companies like Anthropic partnering with Amazon and Google Cloud acting as primary computing partners for various AI firms. Oracle has emerged as a major player in AI infrastructure through unprecedented deals with OpenAI, including a $30 billion cloud services contract revealed in 2025 and a staggering $300 billion five-year compute power

    energy, AI-infrastructure, cloud-computing, data-centers, Nvidia, Microsoft-Azure, OpenAI
  • Inside the Nuclear Bunkers, Mines, and Mountains Being Retrofitted as Data Centers

    The article explores the growing trend of repurposing underground spaces—such as former nuclear bunkers, mines, and mountain caverns—into highly secure data centers to protect critical digital infrastructure. One example is a Cold War-era Royal Air Force nuclear bunker in southeast England, now operated by Cyberfort Group as a cloud computing facility. This site, along with others worldwide, including former bomb shelters in China, Soviet command centers in Kyiv, and abandoned U.S. Department of Defense bunkers, has been transformed to serve as “future-proof” data storage locations. These subterranean centers leverage their inherent physical security and environmental stability to safeguard valuable digital data, reflecting a modern continuation of humanity’s ancient practice of storing precious items underground. The article also highlights notable underground data centers such as Stockholm’s Pionen bunker, the Mount10 AG complex in the Swiss Alps, and Iron Mountain’s facilities in former mines in the U.S. Additionally, the National Library of Norway and the Arctic World Archive in a rep

    data-centers, energy-infrastructure, underground-facilities, digital-storage, cybersecurity, cloud-computing, energy-efficiency
  • What’s behind the massive AI data center headlines?

    The article discusses the recent surge in massive AI data center investments in Silicon Valley, driven primarily by the needs of OpenAI and its partners. Nvidia announced significant infrastructure commitments, while OpenAI revealed plans to expand capacity through collaborations with Oracle and SoftBank, adding gigawatts of new power to support future versions of ChatGPT. These individual deals are enormous, but collectively they highlight Silicon Valley’s intense efforts to provide OpenAI with the computational resources required to train and operate increasingly powerful AI models. OpenAI also introduced a new AI feature called Pulse, which operates independently of the ChatGPT app and is currently available only to its $200-per-month Pro subscribers due to limited server capacity. The company aims to expand such features to a broader user base but is constrained by the availability of AI data centers. The article raises the question of whether the hundreds of billions of dollars being invested in AI infrastructure to support OpenAI’s ambitions are justified by the value of features like Pulse. The piece also alludes to broader

    energy, data-centers, AI-infrastructure, power-consumption, cloud-computing, server-capacity, Silicon-Valley-investments
  • OpenAI is building five new Stargate data centers with Oracle and SoftBank

    OpenAI is expanding its AI infrastructure by building five new Stargate data centers in collaboration with Oracle and SoftBank. Three of these centers are being developed with Oracle and are located in Shackelford County, Texas; Doña Ana County, New Mexico; and an undisclosed Midwest location. The remaining two centers are being developed with SoftBank, situated in Lordstown, Ohio, and Milam County, Texas. This expansion is part of OpenAI’s broader strategy to enhance its capacity for training and deploying more advanced AI models. Additionally, OpenAI recently announced a deal to acquire AI processors from a chipmaker, which will support further development of its AI data center network. The new Stargate data centers underscore OpenAI’s commitment to scaling its infrastructure to meet growing computational demands.

    energy, data-centers, AI-infrastructure, chipmakers, technology-partnerships, cloud-computing, energy-efficiency
  • Swiss startup turns NASA-inspired Mars tech into jet crack detector

    Mondaic, a Swiss startup spun off from ETH Zurich, has adapted wave physics software originally developed to study Mars’s interior for use in monitoring infrastructure safety on Earth. Founded in 2018 by Christian Boehm and colleagues, the company repurposed modeling tools from NASA’s InSight Mars mission to non-invasively detect hidden structural flaws such as cracks, voids, and water infiltration in bridges, pipelines, and aircraft parts. Their technology works by sending waves through solid objects and comparing the wave behavior to a precise digital twin model, enabling identification and localization of damage without drilling or cutting. Transitioning from a research tool to a practical product required making the software stable, user-friendly, and fully automated. Leveraging cloud computing, Mondaic’s platform now performs complex wave analyses rapidly and is accessible to infrastructure teams without specialized wave physics knowledge. The system is currently employed in collaboration with the Swiss Federal Roads Office to inspect bridges, detecting early signs of damage to enable timely maintenance. Beyond

    materials, infrastructure-monitoring, wave-physics, digital-twin, non-destructive-testing, cloud-computing, structural-health-monitoring
  • The billion-dollar infrastructure deals powering the AI boom

    The article highlights the massive financial investments and infrastructure developments fueling the current AI boom, emphasizing the enormous computing power required to run advanced AI models. Nvidia CEO Jensen Huang projects that $3 to $4 trillion will be spent on AI infrastructure by 2030, with significant contributions from AI companies themselves. Major tech players such as Microsoft, OpenAI, Meta, Oracle, Google, and Amazon are heavily investing in cloud services, data centers, and specialized hardware to support AI training and deployment. These efforts are straining power grids and pushing the limits of existing data center capacities. A pivotal moment in the AI infrastructure race was Microsoft’s initial $1 billion investment in OpenAI, which secured Microsoft as OpenAI’s exclusive cloud provider and laid the groundwork for a partnership that has since grown to nearly $14 billion. Although OpenAI has recently diversified its cloud partnerships, this model of exclusive or primary cloud provider deals has become common, with Amazon investing $8 billion in Anthropic and Nvidia committing $100 billion to

    energy, AI-infrastructure, cloud-computing, data-centers, power-grids, Nvidia, Microsoft-Azure
  • Why the Oracle-OpenAI deal caught Wall Street by surprise

    The recent surprise deal between OpenAI and Oracle caught Wall Street off guard but underscores Oracle’s continuing significance in AI infrastructure despite its legacy status. OpenAI’s willingness to commit substantial funds—reportedly around $60 billion annually for compute and custom AI chip development—signals its aggressive scaling strategy and desire to diversify infrastructure providers to mitigate risk. Industry experts highlight that OpenAI is assembling a comprehensive global AI supercomputing foundation, which could give it a competitive edge. Oracle’s involvement, while unexpected to some given its perceived diminished role compared to cloud giants like Google, Microsoft, and AWS, is explained by its proven capabilities in delivering large-scale, high-performance infrastructure, including supporting TikTok’s U.S. operations. However, key details about the deal remain unclear, particularly regarding how OpenAI will finance and power its massive compute needs. The company is burning through billions annually despite growing revenues from ChatGPT and other products, raising questions about sustainability. Energy sourcing is a critical concern since data centers are projected to

    energy, AI-infrastructure, cloud-computing, supercomputing, data-centers, power-consumption, OpenAI
  • Smart ring maker Oura’s CEO addresses recent backlash, says future is a ‘cloud of wearables’

    Oura CEO Tom Hale addressed recent backlash stemming from misinformation that the company shares user data with the U.S. government. Hale firmly denied these claims, clarifying that Oura’s health data—collected through its smart rings, including metrics like heart rate, sleep, and body temperature—is never shared or sold without explicit user consent. He explained that while Oura participates in a Department of Defense (DoD) program, the enterprise solution operates in a separate, secure environment inaccessible to the government. Hale also dispelled rumors about a significant partnership with Palantir, stating that Oura’s relationship is limited to a small commercial contract related to a DoD certification standard (Impact Level 5) and does not involve data sharing or system integration. Hale emphasized the company’s commitment to user privacy and security, noting that Oura’s terms of service explicitly oppose using user data for surveillance or prosecution. Access to user data is tightly controlled and only permitted with user authorization for specific purposes, such as

    IoT, wearable-technology, smart-ring, data-privacy, health-tracking, cloud-computing, cybersecurity
  • Oracle to back massive 1.4-gigawatt gas-powered data center in US

    Oracle is investing heavily in AI-focused cloud computing with the development of a massive 1.4-gigawatt data center campus in Shackelford County, Texas. The site, called Frontier and developed by Vantage Data Centers, will span 1,200 acres and include 10 data centers totaling 3.7 million square feet. Designed to support ultra-high-density racks and liquid cooling for next-generation GPU workloads, the campus aims to meet the growing demand for AI computing power. Construction is underway, with the first building expected to be operational in the second half of 2026. Oracle plans to operate the facility primarily using gas-powered generators rather than waiting for utility grid connections, reflecting the urgency to bring these data centers online despite the environmental concerns associated with gas turbine emissions. Oracle has transformed from a traditional database software company into a major cloud services provider focused on AI computing, securing significant deals such as hosting TikTok’s U.S. traffic and powering Elon Musk’s xAI. The company

    energy, data-center, cloud-computing, AI, gas-power, liquid-cooling, high-density-racks
  • US lab taps Amazon cloud to build AI-powered nuclear reactors

    Idaho National Laboratory (INL), a leading U.S. Department of Energy nuclear research facility, has partnered with Amazon Web Services (AWS) to leverage advanced cloud computing and artificial intelligence (AI) for the development of autonomous nuclear reactors. This collaboration aims to create digital twins—virtual replicas—of small modular reactors (SMRs) ranging from 20 to 300 megawatts. Using AWS tools such as Bedrock, SageMaker, and custom AI chips (Inferentia, Trainium), INL plans to enhance modeling, simulation, and ultimately enable safe, self-operating nuclear plants. The initiative is designed to reduce costs, shorten development timelines, and modernize the nuclear energy sector, which has historically faced regulatory delays and high expenses. This partnership is part of a broader U.S. government strategy to integrate AI into nuclear energy infrastructure, supporting faster, safer, and smarter reactor design and operation. It follows a similar deal between Westinghouse and Google Cloud, signaling AI’s growing

    energy, artificial-intelligence, nuclear-reactors, digital-twins, cloud-computing, autonomous-systems, small-modular-reactors
  • 100x more precise: Autonomous systems to get accurate positioning

    Swift Navigation, a San Francisco-based company, has developed Skylark, a cloud-based precise positioning service that enhances the accuracy of conventional GNSS systems by 100 times, achieving centimeter-level precision critical for autonomous vehicles, robotics, and precision logistics. Whereas conventional GNSS is accurate to within 3 to 10 meters, Skylark delivers sub-inch accuracy by correcting GNSS signal errors in real time. Notably, Skylark is the first real-time cloud service certified to meet the ISO 26262:2018 functional safety standards for road vehicles, enabling scalable, safety-certified positioning without relying on expensive physical data centers. Skylark’s advanced technology leverages atmospheric modeling, carrier-grade networks, and a cloud-native architecture to provide reliable, cost-effective, and high-integrity positioning at scale. The system currently supports over 10 million ADAS-enabled and autonomous vehicles globally and is integrated into programs with more than 20 automotive OEMs, Tier 1 suppliers, robotics companies, and large commercial fleet operators.

    robot, autonomous-vehicles, precise-positioning, cloud-computing, GNSS, robotics, vehicle-autonomy
  • Amazon cloud powers US bid for autonomous next-gen nuclear reactors

    Idaho National Laboratory (INL) and Amazon Web Services (AWS) have partnered to leverage AWS’s cloud computing, AI foundation models via Amazon Bedrock, and specialized hardware to advance next-generation autonomous nuclear reactors. The collaboration aims to reduce the cost and time involved in designing, licensing, building, and operating nuclear facilities, with the long-term goal of enabling safe, reliable autonomous operation of advanced reactors to accelerate their deployment. INL will utilize AWS’s AI models and computing power to develop nuclear energy applications, including creating a digital twin—a virtual simulation model—of a small modular reactor (SMR) as a key initial project. This initiative is part of a broader strategy to foster collaboration among government labs, AI firms, and nuclear developers, enhancing reactor safety, efficiency, and responsiveness. The digital twin technology will allow near real-time simulations critical for autonomous control systems. The effort aligns with a growing trend of integrating AI into nuclear energy, exemplified by similar work at Oak Ridge National Laboratory, which

    energy, nuclear-energy, autonomous-reactors, AI-in-energy, cloud-computing, digital-twin, small-modular-reactors
  • OpenAI agreed to pay Oracle $30B a year for data center services

    OpenAI has confirmed it signed a landmark $30 billion per year deal with Oracle for data center services, a contract initially disclosed by Oracle in late June without naming the customer. This agreement is part of OpenAI’s ambitious Stargate project, a $500 billion initiative to build massive data center capacity. Specifically, the deal covers 4.5 gigawatts of power—equivalent to the output of two Hoover Dams—enough to power about four million homes. The data center, known as Stargate I, is being constructed in Abilene, Texas, and represents a significant expansion of infrastructure to support OpenAI’s rapidly growing computational needs. While the deal has propelled Oracle’s stock to record highs and made its founder Larry Ellison the world’s second richest person, the project poses substantial challenges. Building and operating such a large-scale data center will require enormous capital and energy expenditures. Oracle has already spent $21.2 billion on capital expenditures in its last fiscal year and plans to

    energy, data-centers, cloud-computing, OpenAI, Oracle, power-capacity, infrastructure
  • India’s richest man wants to turn every TV into a PC

    Jio Platforms, the digital division of Reliance Industries led by India’s richest man Mukesh Ambani, has introduced JioPC, a cloud-based virtual desktop service aimed at transforming millions of TVs in India into PCs. Accessible via Jio’s set-top box—available free with home broadband or for purchase at ₹5,499 ($64)—the service is currently in free trial and requires users to connect a keyboard and mouse to their TV. While JioPC supports open-source LibreOffice pre-installed and allows Microsoft Office apps through a browser, it currently lacks support for external peripherals like cameras and printers. The initiative targets the large gap in PC ownership in India, where only 15% of households own a PC despite 70% having a TV. Industry experts see potential in JioPC to expand Reliance’s user base, which already exceeds 488 million, especially by reaching rural and low-income segments. However, challenges remain in educating consumers about using a PC on a TV and addressing

    IoT, cloud-computing, virtual-desktop, set-top-box, digital-services, broadband, Jio-Platforms
  • Nimble moves to cloud-based PTC development tools for logistics robots - The Robot Report

    Nimble, a developer of AI-powered logistics robots designed for picking, packing, and handling warehouse items, is transitioning from legacy file-based design and management tools to cloud-native platforms provided by PTC Inc. Specifically, Nimble is adopting PTC’s Onshape CAD and PDM platform alongside the Arena PLM and QMS system to enhance collaboration, reduce latency, and improve reliability across its teams. This shift to connected, cloud-native development tools was made swiftly—within 60 days of evaluation—and is aimed at supporting Nimble’s scaling efforts in manufacturing and R&D for its advanced mobile manipulator robots. PTC highlights that Onshape and Arena facilitate digital transformation by enabling more agile, collaborative workflows and efficient scaling, replacing traditional file-based systems with integrated cloud solutions. Onshape offers capabilities such as CAD, simulation, and built-in product data management accessible from any web-connected device, while Arena centralizes product information and processes to accelerate product development and introduction. Founded in 2017, Nimble

    robotics, logistics-robots, cloud-computing, AI-robots, warehouse-automation, PTC-Onshape, product-lifecycle-management
  • Want to know where VCs are investing next? Be in the room at TechCrunch Disrupt 2025

    TechCrunch Disrupt 2025, taking place October 27-29 at Moscone West in San Francisco, offers early-stage founders a valuable opportunity to hear directly from top venture capitalists about upcoming investment trends. A highlighted session on October 27 at 1:00 pm features Nina Achadjian (Index Ventures), Jerry Chen (Greylock), and Viviana Faga (Felicis), who will share their 2026 investment priorities across sectors such as AI, data, cloud, robotics, and more. These seasoned VCs will discuss emerging innovations and sectors attracting smart money, providing founders with insights into where venture capital is headed next. Each VC brings distinct expertise: Nina Achadjian focuses on automating overlooked functions and industries by replacing outdated tools, emphasizing founders with empathy, curiosity, and growth mindsets. Jerry Chen invests in product-driven founders working in AI, data, cloud infrastructure, and open-source technologies, leveraging his decade-long experience at VMware. Viviana Faga specializes

    robot, AI, cloud-computing, venture-capital, automation, enterprise-software, SaaS
  • Amazon joins the big nuclear party, buying 1.92 GW for AWS

    Amazon has joined a growing trend among major tech companies by securing 1.92 gigawatts of electricity from Talen Energy’s Susquehanna nuclear power plant in Pennsylvania to power its AWS cloud and AI servers. Unlike an earlier plan where Amazon intended to build a data center adjacent to the plant and draw power directly—bypassing the grid and transmission fees—regulatory concerns led to a revised agreement. The current deal positions Amazon as a grid-connected customer, paying transmission fees like other users, with the arrangement set to last through 2042. Transmission infrastructure upgrades are planned for spring 2026 to support this setup. Beyond the power purchase, Amazon and Talen Energy plan to explore building small modular reactors (SMRs) within Talen’s Pennsylvania footprint and expanding output at existing nuclear plants. Such expansions typically involve optimizing fuel enrichment, turbine upgrades, or other modifications to increase power generation. This move aligns Amazon with peers like Microsoft and Meta, who have also made significant investments in nuclear

    energy, nuclear-power, AWS, cloud-computing, small-modular-reactors, clean-energy, power-purchase-agreement
  • Cast AI raises $108M to get the most out of AI, Kubernetes and other workloads

    Cast-AI, funding, AI-optimization, Kubernetes, cloud-computing, automation, workload-management