SONY AI | Project Ace: for the first time, AI/robotics is competitive against pro table tennis players
Full source: https://www.youtube.com/watch?v=FrGq8ltb-_E
Nature paper: https://www.nature.com/articles/s41586-026-10338-5
Sony AI's autonomous table-tennis robot Ace has become the first robot to compete against top-level human players. Reuters reports: Ace, created by the AI research division of the Japanese company Sony, is the first robot to attain expert-level performance in a competitive physical sport, one that requires rapid decisions and precise execution, the project's leader said. Ace did so by combining high-speed perception, AI-based control and a state-of-the-art robotic system. There have been various ping-pong-playing robots since 1983, but until now none could rival highly skilled human competitors. Ace changed that with its performances against elite-level and professional human players in matches played under the rules of the International Table Tennis Federation, the sport's governing body, and officiated by licensed umpires.
The project's goal was not only to compete at table tennis but to develop insights into how robots can perceive, plan and act with human-like speed and precision in dynamic environments. In matches detailed in the study, Ace in April 2025 won three of five matches against elite players and lost two matches against professional players, the top skill level in the sport. Sony AI said that since then, Ace has beaten professional players in December 2025 and again last month. "The success of Ace, with its perception system and learning-based control algorithm, suggests that similar techniques could be applied to other areas requiring fast, real-time control and human interaction -- such as manufacturing and service robotics, as well as applications across sports, entertainment and safety-critical physical domains," said Peter Durr, director of Sony AI Zurich and leader for Sony AI's project Ace.
The findings have been published in the journal Nature.

OpenAI is giving users of its Business, Enterprise, Edu, and Teachers plans access to cloud-based "workspace" agents available in ChatGPT that can perform business tasks. In its blog post, OpenAI gives examples of agents like one that finds product feedback on the web and sends a report in Slack and a sales agent that can draft follow-up emails in Gmail.
These new agents follow increasing interest in agents across the AI landscape, especially after OpenClaw - the AI agent formerly known as Clawdbot and Moltbot that touts itself as the "AI that actually does things" - went viral. OpenClaw founder Peter Steinberger now works for OpenAI. OpenA …
Confidence is persuasive. In artificial intelligence systems, it is often misleading.
Today's most capable reasoning models share a trait with the loudest voice in the room: They deliver every answer with the same unshakable certainty, whether they're right or guessing. Researchers at MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL) have now traced that overconfidence to a specific flaw in how these models are trained, and developed a method that fixes it without giving up any accuracy.
The technique, called RLCR (Reinforcement Learning with Calibration Rewards), trains language models to produce calibrated confidence estimates alongside their answers. In addition to coming up with an answer, the model thinks about its uncertainty in that answer, and outputs a confidence score. In experiments across multiple benchmarks, RLCR reduced calibration error by up to 90 percent while maintaining or improving accuracy, both on the tasks the model was trained on and on entirely new ones it had never seen. The work will be presented at the International Conference on Learning Representations later this month.
The problem traces to a surprisingly simple source. The reinforcement learning (RL) methods behind recent breakthroughs in AI reasoning, including the training approach used in systems like OpenAI's o1, reward models for getting the right answer, and penalize them for getting it wrong. Nothing in between. A model that arrives at the correct answer through careful reasoning receives the same reward as one that guesses correctly by chance. Over time, this trains models to confidently answer every question they are asked, whether they have strong evidence or are effectively flipping a coin.
That overconfidence has consequences. When models are deployed in medicine, law, finance, or any setting where users make decisions based on AI outputs, a system that expresses high confidence regardless of its actual certainty becomes unreliable in ways that are difficult to detect from the outside. A model that says "I'm 95 percent sure" when it is right only half the time is more dangerous than one that simply gets the answer wrong, because users have no signal to seek a second opinion.
"The standard training approach is simple and powerful, but it gives the model no incentive to express uncertainty or say I don’t know," says Mehul Damani, an MIT PhD student and co-lead author on the paper. "So the model naturally learns to guess when it is unsure."
RLCR addresses this by adding a single term to the reward function: a Brier score, a well-established measure that penalizes the gap between a model's stated confidence and its actual accuracy. During training, models learn to reason about both the problem and their own uncertainty, producing an answer and a confidence estimate together. Confidently wrong answers are penalized. So are unnecessarily uncertain correct ones.
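To make that concrete, here is a minimal sketch of an RLCR-style reward in Python. It assumes the simplest combination consistent with the description above, a binary correctness term minus a Brier penalty on the stated confidence; the exact formulation and weighting in the paper may differ.

```python
def rlcr_reward(is_correct: bool, confidence: float) -> float:
    """RLCR-style reward sketch: correctness minus a Brier calibration penalty.

    is_correct: whether the model's final answer was right.
    confidence: the model's self-reported probability (0.0 to 1.0)
                that its answer is correct.
    """
    y = 1.0 if is_correct else 0.0
    brier_penalty = (confidence - y) ** 2  # zero when confidence matches the outcome
    return y - brier_penalty

# Confidently wrong answers are penalized hardest; needless doubt also costs reward.
print(rlcr_reward(False, 0.95))  # -0.9025: confidently wrong
print(rlcr_reward(True, 0.55))   #  0.7975: right but needlessly unsure
print(rlcr_reward(True, 0.95))   #  0.9975: right and confident
```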
The math backs it up: the team proved formally that this type of reward structure guarantees models that are both accurate and well-calibrated. They then tested the approach on a 7-billion-parameter model across a range of question-answering and math benchmarks, including six datasets the model had never been trained on.
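The calibration guarantee rests on a standard property of the Brier score: it is a proper scoring rule, so a model maximizes its expected reward only by reporting its true probability of being correct. A one-line derivation of that property (ours, not quoted from the paper): if the answer is correct with probability p and the model reports confidence q, then

```latex
\mathbb{E}\left[-(q - Y)^2\right] = -\left((q - p)^2 + p(1 - p)\right),
\qquad Y \sim \mathrm{Bernoulli}(p),
```

which is maximized exactly at q = p, so honestly reporting uncertainty is the reward-optimal policy.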
The results showed a consistent pattern. Standard RL training actively degraded calibration compared to the base model, making models worse at estimating their own uncertainty. RLCR reversed that effect, substantially improving calibration with no loss in accuracy. The method also outperformed post-hoc approaches, in which a separate classifier is trained to assign confidence scores after the fact. "What’s striking is that ordinary RL training doesn't just fail to help calibration. It actively hurts it," says Isha Puri, an MIT PhD student and co-lead author. "The models become more capable and more overconfident at the same time."
The team also demonstrated that the confidence estimates produced by RLCR are practically useful at inference time. When models generate multiple candidate answers, selecting the one with the highest self-reported confidence, or weighting votes by confidence in a majority-voting scheme, improves both accuracy and calibration as compute scales.
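As a rough illustration of the second scheme (our sketch, not the authors' code), votes from repeated samples can be weighted by each sample's self-reported confidence:

```python
from collections import defaultdict

def confidence_weighted_vote(candidates: list[tuple[str, float]]) -> str:
    """Return the answer whose summed self-reported confidence is highest.

    candidates: (answer, confidence) pairs from repeated sampling.
    """
    scores: defaultdict[str, float] = defaultdict(float)
    for answer, confidence in candidates:
        scores[answer] += confidence
    return max(scores, key=scores.get)

# Three low-confidence samples say "42"; one high-confidence sample says "41".
samples = [("42", 0.30), ("42", 0.20), ("42", 0.25), ("41", 0.90)]
print(confidence_weighted_vote(samples))  # "41": 0.90 outweighs 0.75
```

With uniform weights this reduces to plain majority voting, so calibrated confidences are what make the scheme pay off.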
An additional finding suggests that the act of reasoning about uncertainty itself has value. The researchers trained classifiers on model outputs and found that including the model's explicit uncertainty reasoning in the input improved the classifier's performance, particularly for smaller models. The model's self-reflective reasoning about what it does and doesn’t know contains real information, not just decoration.
In addition to Damani and Puri, other authors on the paper are Stewart Slocum, Idan Shenfeld, Leshem Choshen, and senior authors Jacob Andreas and Yoon Kim.
Google announced two new tensor processing units (TPUs) for the "agentic era," with separate processors dedicated to training and inference. "With the rise of AI agents, we determined the community would benefit from chips individually specialized to the needs of training and serving," Amin Vahdat, a Google senior vice president and chief technologist for AI and infrastructure, said in a blog post. Both chips will become available later this year. CNBC reports: After years of producing chips that can both train artificial intelligence models and handle inference work, Google is separating those tasks into distinct processors, its latest effort to take on Nvidia in AI hardware. [...] None of the tech giants are displacing Nvidia, and Google isn't even comparing the performance of its new chips with those from the AI chip leader. Google did say the training chip enables 2.8 times the performance of the seventh-generation Ironwood TPU, announced in November, for the same price, while performance is 80% better for the inference processor.
Groq said its upcoming Groq 3 LPU hardware will draw on large quantities of static random-access memory, or SRAM, which is also used by Cerebras, an AI chipmaker that filed to go public earlier this month. Google's new inference chip, dubbed TPU 8i, also relies on SRAM. Each chip contains 384 megabytes of SRAM, triple the amount in Ironwood. The architecture is designed "to deliver the massive throughput and low latency needed to concurrently run millions of agents cost-effectively," Sundar Pichai, CEO of Google parent Alphabet, wrote in a blog post.

Humans have been building ping-pong playing robots for decades, such as Omron's FORPHEUS that challenged amateur competitors at CES 2017. What sets Ace apart from the rest is that the robot, which was developed by Sony's AI division, is the first that can hold its own against top-ranked human players and occasionally even beat them in matches that follow the official rules of the International Table Tennis Federation (ITTF).
AI is already capable of besting humans at games like Chess and Go, but physical games pose a much greater challenge as robots have to be engineered to match the speed and responsiveness of the human mind and body. To b …
CATL unveiled a new wave of EV battery tech, "including a lighter battery pack rated for a 1,000-km (621-mile) driving range and an upgraded fast-charging battery that can go from 10 percent to 98 percent in under seven minutes," reports Interesting Engineering. From the report: The launches were made during a 90-minute event in Beijing ahead of the Beijing Auto Show, where automakers are expected to showcase next-generation EVs and connected technologies. CATL said its latest Qilin battery -- a high-energy-density pack often paired with nickel manganese cobalt (NMC) cells for long range and improved space efficiency -- can deliver a 1,000-km (621-mile) driving range. It is designed to deliver long range while reducing battery pack weight.
The company said the product is aimed at automakers facing tighter efficiency rules in China and other markets. It also rolled out an upgraded Shenxing battery -- CATL's fast-charging lithium iron phosphate (LFP) pack -- that targets one of the biggest barriers to EV adoption: charging time. CATL said the pack can recharge from 10 percent to 98 percent in less than seven minutes.
The new Shenxing battery marks a significant improvement over CATL's previous version, which charged from 5 percent to 80 percent in 15 minutes, according to Financial Times. [...] The company also announced plans to begin mass delivery of sodium-ion batteries in the fourth quarter. Sodium-ion technology is seen as a lower-cost alternative that could reduce dependence on lithium, cobalt, and nickel.
Physicists have spent the last 20 years pondering an apparent discrepancy between experimental results and theoretical predictions for the magnetic properties of the muon, the electron's heavier cousin—a mismatch that hinted at a possible fifth force. But according to a new paper published in the journal Nature, the discrepancy is due to a calculation fluke, not exciting new physics, so the Standard Model of particle physics is still holding strong.
“There were many calculations in the last 60 years or so, and as they got more and more precise, they all pointed toward a discrepancy and a new interaction that would upend known laws of physics,” said co-author Zoltan Fodor, a physicist at Penn State University. “We applied a new method to calculate this discrepancy quantity, and we showed that it’s not there. This new interaction we hoped for simply is not there. The old interactions can explain the value completely.”
As previously reported, the muon (a member of the lepton classification) is the heavier second-generation cousin of the electron—the tau is the third-generation cousin—and that makes muons particularly sensitive to virtual particles popping into and out of existence in the quantum vacuum, since they can briefly interact with those virtual particles. Muons are special to physicists because they are light enough to be plentiful yet heavy enough to be used experimentally to probe the accuracy of the Standard Model of particle physics.
This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.
What actually matters in AI right now? It’s getting harder to tell amid the constant launches, hype, and warnings. To cut through the noise, MIT Technology Review’s reporters and editors have distilled years of analysis into a new essential guide: the 10 Things That Matter in AI Right Now.
The list builds on our annual 10 Breakthrough Technologies, but takes a wider view of the ideas, topics, and research shaping AI, spotlighting the trends and breakthroughs shaping the world.
We’ll be unpacking one item from the list each day here in The Download, explaining what it means and why it matters. Read the full rundown now—and stay tuned for the days ahead.
As the conflict in Iran has escalated, a crucial resource is under fire: the desalinization technology that supplies water in the region.
President Donald Trump recently threatened to destroy “possibly all desalinization plants” in Iran if the Strait of Hormuz is not reopened. The impact on farming, industry, and—crucially—drinking in the Middle East could be severe. Find out why.
—Casey Crownhart
This is our latest story to be turned into an MIT Technology Review Narrated podcast, which we publish each week on Spotify and Apple Podcasts. Just navigate to MIT Technology Review Narrated on either platform, and follow us to get all our new content as it’s released.
The must-reads
I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.
1 An unauthorized group has reportedly accessed Anthropic’s Mythos
Users in a private online forum may have gained access. (Bloomberg $)
+ Anthropic said the model was too dangerous for a full release. (Axios)
+ Mozilla used it to find 271 security vulnerabilities in Firefox. (Wired $)
2 Meta will track workers’ clicks and keystrokes for AI training
Tracking software is being installed on workers’ computers. (Reuters $)
+ Employees are up in arms about the program. (Business Insider)
+ LLMs could supercharge mass surveillance in the US. (MIT Technology Review)
3 ChatGPT allegedly advised the Florida State shooter
About when and where to strike, and which ammunition to use. (Washington Post $)
+ Florida’s attorney general is probing ChatGPT’s role in the shooting. (Ars Technica)
+ Does AI cause delusions or just amplify them? (MIT Technology Review)
4 SpaceX has secured the option to buy AI startup Cursor for $60 billion
Or pay $10 billion for the work they’re doing together. (The Verge)
+ SpaceX made the deal as it prepares to go public. (NYT $)
+ Musk’s endgame for the company may be a land grab in space. (The Atlantic $)
5 The Pentagon wants $54 billion for drones
That would rank among the top 10 military budgets for entire nations. (Ars Technica)
+ Shoplifters could soon be chased down by drones. (MIT Technology Review)
6 Apple’s new chief hardware officer signals a sprint to build in-house chips
Apple silicon lead Johny Srouji has been promoted to the role. (CNBC)
7 China’s government is tightening its grip on AI firms that try to leave
It’s doing all it can to stop firms like Manus sending talent and research overseas. (Washington Post $)
8 The FBI is probing the deaths of scientists tied to sensitive research
Including a nuclear physicist and MIT professor shot outside his home. (CNN)
9 The US is accelerating research into psychedelic medical treatment
Including the mysterious ibogaine. (Nature)
+ But psychedelics are (still) falling short in clinical trials. (MIT Technology Review)
10 The first retail boutique run by an AI agent has opened—and it’s chaos
The San Francisco shop is reassuringly mismanaged. (NYT $)
Quote of the day
—Donald Trump pays a classy tribute to Tim Cook on Truth Social.
One More Thing

A US agency pursuing moonshot health breakthroughs has hired a researcher advocating an extremely radical plan for defeating death. His idea? Replace your body parts. All of them. Even your brain.
Jean Hébert, a program manager at the US Advanced Research Projects Agency for Health (ARPA-H), believes we can beat aging by adding youthful tissue to people’s brains. Read the full story on his futuristic plan to extend human life.
—Antonio Regalado
We can still have nice things
A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line.)
+ A Lego set was sent to the edge of space—and survived.
+ Go behind the scenes with Werner Herzog as he guides a new generation of filmmakers.
+ This video about enshittification perfectly captures the frustration of the degrading internet.
+ NASA’s latest deep-space capture offers a rare view of planetary systems in their absolute infancy.
The culmination of a decade of development, TPU 8t and TPU 8i are custom-engineered to power the next generation of supercomputing with efficiency and scale.
NVIDIA and Google Cloud have collaborated for more than a decade, co‑engineering a full‑stack AI platform that spans every technology layer — from performance‑optimized libraries and frameworks to enterprise‑grade cloud services.
This foundation enables developers, startups and enterprises to push agentic and physical AI out of the lab and into production — from agents that manage complex workflows to robots and digital twins on the factory floor.
At Google Cloud Next this week in Las Vegas, the partnership reaches a new milestone, with advancements to expand Google Cloud AI Hypercomputer for AI factories that will power the next frontier of agentic and physical AI.
These include the new NVIDIA Vera Rubin-powered A5X bare-metal instances; a preview of Google Gemini on Google Distributed Cloud running on NVIDIA Blackwell and NVIDIA Blackwell Ultra GPUs; confidential VMs with NVIDIA Blackwell GPUs; and agentic AI on Gemini Enterprise Agent Platform with NVIDIA Nemotron open models and the NVIDIA NeMo framework.
At Google Cloud Next, Google announced A5X powered by NVIDIA Vera Rubin NVL72 rack-scale systems, which — through extreme codesign across chips, systems and software — deliver up to 10x lower inference cost per token and 10x higher token throughput per megawatt than the prior generation.
A5X will use NVIDIA ConnectX-9 SuperNICs, combined with next-generation Google Virgo networking, scaling to up to 80,000 NVIDIA Rubin GPUs within a single site cluster and up to 960,000 NVIDIA Rubin GPUs in a multisite cluster, enabling customers to run their largest AI workloads on NVIDIA‑optimized infrastructure.
“At Google Cloud, we believe the next decade of AI will be shaped by customers’ ability to run their most demanding workloads on a truly integrated, AI‑optimized infrastructure stack,” said Mark Lohmeyer, vice president and general manager of AI and computing infrastructure at Google Cloud. “By combining Google Cloud’s scalable infrastructure and managed AI services with NVIDIA’s industry‑leading platforms, systems and software, we’re giving customers flexibility to train, tune and serve everything from frontier and open models to agentic and physical AI workloads — while optimizing for performance, cost and sustainability.”
Google Cloud’s broad NVIDIA Blackwell portfolio ranges from A4 VMs with NVIDIA HGX B200 systems to rack-scale A4X VMs with NVIDIA GB200 NVL72 and A4X Max NVIDIA GB300 NVL72 systems, all the way to fractional G4 VMs with NVIDIA RTX PRO 6000 Blackwell Server Edition GPUs.
Customers can right-size their acceleration capabilities, whether using multiple interconnected NVL72 racks that scale out to tens of thousands of NVIDIA Blackwell GPUs, a single rack that can scale up to 72 Blackwell GPUs with fifth-generation NVIDIA NVLink and NVLink 5 Switch, or just one-eighth of a GPU.
This comprehensive platform helps teams optimize every workload, from mixture-of-experts reasoning, multimodal inference and data processing to complex simulations for the next frontier of physical AI and robotics.
Leading frontier AI labs are already putting this infrastructure to work. Thinking Machines Lab is scaling its Tinker application programming interface (API) on A4X Max VMs with GB300 NVL72 systems to accelerate training, while OpenAI is running large‑scale inference on NVIDIA GB300 (A4X Max VMs) and GB200 NVL72 systems (A4X VMs) on Google Cloud for some of its most demanding inference workloads, including for ChatGPT.
Google Gemini models running on NVIDIA Blackwell and Blackwell Ultra GPUs are now in preview on Google Distributed Cloud, so customers can bring Google’s frontier models wherever their most sensitive data resides.
NVIDIA Confidential Computing with the NVIDIA Blackwell platform enables Gemini models to run in a protected environment where prompts and fine‑tuning data stay encrypted and can’t be seen or altered by unauthorized parties, including the infrastructure operators.
In the public cloud, the preview of Confidential G4 VMs with NVIDIA RTX PRO 6000 Blackwell GPUs brings these protections to multi‑tenant environments — helping safeguard prompts, AI models and data so customers in regulated industries can access the power of AI without compromising on security or performance.
This is the first confidential computing offering of NVIDIA Blackwell GPUs in the cloud, giving Google Cloud customers a new foundation for secure, high‑performance AI.
The NVIDIA platform on Google Cloud is optimized to run every kind of model — from Google’s frontier Gemini and Gemma families to NVIDIA Nemotron open models and the broader open weight ecosystem — equipping developers to build agentic AI systems that reason, plan and act.
NVIDIA Nemotron 3 Super is available on Gemini Enterprise Agent Platform, giving developers a direct path to discovering, customizing and deploying NVIDIA‑optimized reasoning and multimodal models for agentic workflows.
Google Cloud and NVIDIA are also making it easier to train and customize open models at scale. Managed Training Clusters on Gemini Enterprise Agent Platform introduced a new managed reinforcement learning (RL) API built with NVIDIA NeMo RL for accelerating RL training at scale while automating cluster sizing, failure recovery and job execution, so teams can focus on agent behavior and model quality instead of infrastructure management.
Cybersecurity leader CrowdStrike uses NVIDIA NeMo open libraries such as NeMo Data Designer, NeMo Automodel and NeMo Megatron Bridge to generate synthetic data and fine-tune Nemotron and other open large language models for domain-specific cybersecurity. Running on Managed Training Clusters on Gemini Enterprise Agent Platform with NVIDIA Blackwell GPUs, these capabilities accelerate threat detection, investigation and response.
Building industrial and physical AI at scale demands powerful hardware and a combination of open models, libraries and frameworks to develop these complex end-to-end workflows.
NVIDIA AI infrastructure, open models and physical AI libraries, available on Google Cloud, are mainstreaming industrial and physical AI applications, enabling customers to simulate, optimize and automate real-world workflows.
Solutions from leading industrial software providers, including Cadence and Siemens Digital Industries Software, are now available on Google Cloud, accelerated on NVIDIA AI infrastructure. These applications are powering the next-generation design, engineering and manufacturing of everything from chips to autonomous vehicles, robotics, aerospace platforms, heavy machinery and large-scale production systems.
With NVIDIA Omniverse libraries and the open source NVIDIA Isaac Sim robotics simulation framework available on Google Cloud Marketplace, developers can build physically accurate digital twins and develop custom robotics simulation pipelines to train, simulate and validate robots before real-world deployment.
NVIDIA NIM microservices for models like NVIDIA Cosmos Reason 2 can be deployed to Google Vertex AI and Google Kubernetes Engine. This enables robots and vision AI agents to see, reason and act in the physical world like humans, powering use cases such as automated data curation and annotation, advanced robot planning and reasoning, and intelligent video analytics agents for real-time insights and decision-making.
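NIM microservices expose an OpenAI-compatible HTTP API, so once a service is running on Vertex AI or Google Kubernetes Engine, calling it can look roughly like the sketch below. The base URL and model identifier are placeholders for illustration, not values from the announcement:

```python
from openai import OpenAI

# Point the standard OpenAI client at a deployed NIM service;
# the in-cluster address below is hypothetical.
client = OpenAI(
    base_url="http://nim-service.example.internal:8000/v1",
    api_key="unused-for-self-hosted-deployments",
)

response = client.chat.completions.create(
    model="nvidia/cosmos-reason-2",  # placeholder model name
    messages=[{
        "role": "user",
        "content": "List the safety hazards visible in this warehouse scene.",
    }],
)
print(response.choices[0].message.content)
```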
Together, these technologies help developers seamlessly move from computer-aided design to living industrial digital twins and AI‑driven robots, accelerating processes from design sign‑off to factory optimization on the NVIDIA platform running on Google Cloud.
Global enterprises, AI labs and high‑growth startups are using NVIDIA and Google Cloud’s co-engineered platform to move from prototyping to production faster, including Snap, Schrödinger and Salesforce. Snap is cutting the cost of large‑scale A/B testing by shifting data pipelines to GPU‑accelerated Spark on Google Cloud. Schrödinger is shrinking weekslong drug discovery simulations into just hours with NVIDIA accelerated computing on Google Cloud.
Startups are orchestrating the next wave of AI innovation — building new agents and AI‑native applications using NVIDIA accelerated computing on Google Cloud.
As part of a broader ecosystem highlighted through NVIDIA Inception and Google for Startups, CodeRabbit and Factory are using NVIDIA Nemotron‑based models on Google Cloud to power code review and autonomous software development agents, while Aible, Mantis AI, Photoroom and Baseten are building enterprise data, video intelligence, generative imagery and managed inference solutions on the full‑stack NVIDIA platform on Google Cloud.
More than 90,000 developers have become a part of the joint NVIDIA and Google Cloud developer community in just over a year, tapping this platform to build and scale new AI applications.
In addition, NVIDIA has been honored at Next as Google Cloud Partner of the Year in two categories — AI Global Technology Partner and Infra Modernization Compute — in recognition of deep technical expertise and go-to-market alignment.
Together, NVIDIA and Google Cloud are giving customers a cloud‑scale platform to turn experimental agents and simulations into production systems that review code, secure fleets, enable new AI applications and optimize factories in the real world.
Learn more about the companies’ collaboration by attending NVIDIA sessions, demos and workshops at Google Cloud Next.
NASA's Curiosity rover has identified a diverse set of organic molecules on Mars, including a nitrogen-bearing compound similar in structure to DNA precursors. The finding strengthens the case that ancient organic material can survive in the Martian subsurface, though it does not prove past life because the compounds could also come from geology or meteorites. Phys.org reports: The study was led by Amy Williams, Ph.D., a professor of geological sciences at the University of Florida and a scientist on the Curiosity and Perseverance Mars rover missions. Curiosity landed on Mars in 2012 to find evidence that ancient Mars had conditions that could support microbial life billions of years ago; the Perseverance rover, which landed in 2021, was sent to look for signs of any ancient life that might have formed.
Among the 20-plus chemicals identified by the experiment, Curiosity spotted a nitrogen-bearing molecule with a structure similar to DNA precursors -- a chemical never before spotted on Mars. The rover also identified benzothiophene, a large, double-ringed, sulfurous chemical often delivered to planets by meteorites. "The same stuff that rained down on Mars from meteorites is what rained down on Earth, and it probably provided the building blocks for life as we know it on our planet," Williams said. The findings have been published in the journal Nature Communications.
Artificial intelligence is moving quickly in the enterprise, from experimentation to everyday use. Organizations are deploying copilots, agents, and predictive systems across finance, supply chains, human resources, and customer operations. By the end of 2025, half of companies used AI in at least three business functions, according to a recent survey.

But as AI becomes embedded in core workflows, business leaders are discovering that the biggest obstacle is not model performance or computing power but the quality and the context of the data on which those systems rely. AI essentially introduces a new requirement: Systems must not only access data — they must understand the business context behind it.
Without that context, AI can generate answers quickly but still make the wrong decision, says Irfan Khan, president and chief product officer of SAP Data & Analytics.
“AI is incredibly good at producing results,” he says. “It moves fast, but without context it can’t exercise good judgment, and good judgment is what creates a return on investment for the business. Speed without judgment doesn’t help. It can actually hurt us.”
In the emerging era of autonomous systems and intelligent applications, that context layer is becoming essential. To provide context, companies need a well-designed data fabric that does more than just integrate data, Khan says. The right data fabric allows organizations to scale AI safely, coordinate decisions across systems and agents, and ensure that automation reflects real business priorities rather than making decisions in isolation.
Recognizing this, many organizations are rethinking their data architecture. Instead of simply moving data into a single repository, they are looking for ways to connect information across applications, clouds, and operational systems while preserving the semantics that describe how the business works. That shift is driving growing interest in data fabric as a foundation for AI infrastructure.
Traditional data strategies have largely focused on aggregation. Over the past two decades, organizations have invested heavily in extracting information from operational systems and loading it into centralized warehouses, lakes, and dashboards. This approach makes it easier to run reports, monitor performance, and generate insights across the business, but in the process, much of the meaning attached to that data — how it relates to policies, processes, and real-world decisions — is lost.
Take two companies using AI to manage supply-chain disruptions. If one uses raw signals such as inventory levels, lead times, and supply scores, while the other adds context across business processes, policies, and metadata, both systems will rapidly analyze the data but likely come up with different conclusions.
Information such as which customers are strategic accounts, what tradeoffs are acceptable during shortages, and the status of extended supply chains will allow one AI system to make strategic decisions, while the other will not have the proper context, Khan says.
“Both systems move very quickly, but only one moves in the right direction,” he says. “This is the context premium and the advantage you gain when your data foundation preserves context across processes, policies and data by design.”
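A toy sketch of that context premium (illustrative only; the names and the priority policy are invented): the same allocation logic reaches a different decision once strategic-account metadata from the knowledge layer is supplied.

```python
def allocate_scarce_stock(orders, units_available, context=None):
    """Fill orders until stock runs out.

    Without context, orders are filled first-come, first-served.
    With context, strategic accounts are served first.
    """
    if context:
        # Sort strategic accounts to the front of the queue.
        orders = sorted(
            orders,
            key=lambda o: o["customer"] not in context["strategic_accounts"],
        )
    filled = []
    for order in orders:
        if units_available >= order["units"]:
            units_available -= order["units"]
            filled.append(order["customer"])
    return filled

orders = [{"customer": "RetailCo", "units": 80},
          {"customer": "KeyAccount", "units": 50}]
print(allocate_scarce_stock(orders, 100))               # ['RetailCo']: fast, wrong priority
print(allocate_scarce_stock(orders, 100,
      context={"strategic_accounts": {"KeyAccount"}}))  # ['KeyAccount']
```

Both runs are equally fast; only the one with context reflects the business's actual priorities.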
In the past, companies could implicitly manage a lack of context because human experts supplied the missing information; AI has no such backstop, and that shortfall creates serious limitations. AI systems do not just display information; they act on it. If the data does not convey why it matters, an AI model may optimize for the wrong outcome. Inventory numbers, payment histories, or demand signals might be accurate, but they do not necessarily reveal which customers must be prioritized, which contractual obligations apply, or which products are strategically important. As a result, the system can produce answers that are technically correct but operationally flawed.
This realization is changing how companies think about AI readiness. Most acknowledge that they do not have the mature data processes and infrastructure in place to trust their data and their AI systems. Only one in five organizations consider their approach to data to be highly mature, and only 9% feel fully prepared to integrate and interoperate with their data systems.
The emerging solution is a data fabric: an abstraction layer that spans infrastructure, architecture, and logical organization. For agentic AI, the fabric becomes the primary interface, allowing agents to interact with business knowledge rather than raw storage systems. Knowledge graphs play a central role, enabling agents to query enterprise data using natural language and business logic.
The value of the data fabric rests on three components: intelligent compute to provide speed, a knowledge pool to provide business understanding and context, and agents to provide autonomous action grounded in that understanding. What makes this powerful is how these capabilities work together, says Khan.
The technology provides the architecture, a foundation that makes agent-to-agent communication and coordination possible. Process defines how business and IT share ownership and establish governance. And people must build a culture with enough trust to adopt it. All three must work together for a business data fabric to succeed.
“It empowers confident, consistent decisions, and when these elements all come together, AI just doesn’t analyze and interpret the data — it drives smarter, faster decisions that really create business impact,” he says. “This is the promise of a thoughtfully designed business data fabric, where every part reinforces the other, and every insight is grounded in trust and clarity.”
Technically, building a data-fabric layer requires several capabilities. Data must be accessible across multiple environments through federation rather than forced consolidation. A semantic or knowledge layer is needed to harmonize meaning across systems, often supported by knowledge graphs and catalog-driven metadata. Governance and policy enforcement must also operate across the fabric so that AI systems can access data securely and consistently.
Together, these elements create a foundation where AI interacts with business knowledge instead of raw storage systems — an essential step for moving from experimentation to real enterprise automation.
In the emerging era of agentic AI, the responsibility for monitoring, analyzing, and making decisions based on data increasingly shifts to software. AI agents can monitor events, trigger workflows, and make decisions in real time, often without direct human intervention. That speed creates new opportunities, but it also raises the stakes. When multiple agents operate across finance, supply chain, procurement, or customer operations, they must be guided by the same understanding of business priorities.
Without a common knowledge layer connecting disparate data together, coordination between systems quickly breaks down. One system might optimize for margin, another for liquidity, and another for compliance, each working from a different slice of data.
Importantly, most enterprises already possess much of the knowledge needed to make this work, says Khan. Years of operational data, master data, workflows, and policy logic already exist across business applications; companies just need to make it accessible. Companies that deploy data fabrics gain greater trust in their data, with more than two-thirds of enterprises reporting improved data accessibility and visibility and greater control over their data.
“The opportunity isn’t just inventing context from scratch, it’s activating and connecting the context across your business that already exists,” he continues, adding that a data fabric is the “architecture that ensures data semantics, business processes and policies are connected as a unified system across all the clouds.”
This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff. It was researched, designed, and written by human writers, editors, analysts, and illustrators. This includes the writing of surveys and collection of data for surveys. AI tools that may have been used were limited to secondary production processes that passed thorough human review.
Los Angeles deserves its reputation as the quintessential car city—the rhythms of its 2,200 square miles are dictated by wide boulevards and concrete arcs of freeways. But it once had a world-class rail transit system, and for the last three decades, the city has been rebuilding a network of trolleys and subways. In May, a new four-mile segment with three new subway stations will open along Wilshire Boulevard, a key east-west corridor that connects downtown LA to the Pacific Ocean. What today can be an hours-long drive through a busy, museum-packed stretch of the city will be, if all goes well, a 25-minute train ride.
The existence of subway stops in this part of town—known as Miracle Mile—is a technological triumph over geography and geology. The ground underneath it is literally a disaster waiting to happen—it’s tarry and full of methane. One of those methane deposits actually exploded in 1985, destroying a department store in the neighborhood. In response, the city pushed its new train routes to other parts of town.
These days, dirt full of flammable goo is no longer a problem. “The technology finally caught up with the concerns,” says LA Metro’s James Cohen, a longtime manager of the engineering for this stretch of subway. The key was an earth-pressure-balance tunnel-boring machine, an automated digger that is designed to chew through ground packed with explosive gas. It sends removed dirt topside via conveyor belts and slides precast concrete liner segments into the tunnel, which are joined together with gaskets to create a gas- and waterproof tube. All that let the machine dig about 50 feet every day.



Meanwhile, engineers excavated the stations from the street level down. They worked mostly on weekends, digging out a space and then decking it with concrete so that work could go on underneath while LA drivers continued to exercise their God-given right to get around by car above.
Did the project finish on time? No. Did it come in under budget? Also no; this segment alone cost nearly $4 billion. Is the city now racing to build housing and walkable areas to take full advantage of the extension? Oh, please. Yet the new stations still manage to feel, in the end, transformative—as if Los Angeles’s train has finally come in.
I’ve been looking into Sony’s new PlayStation age-verification rollout for the UK and Ireland, and the part that stands out is how many normal features get tied to it.
If an adult account doesn’t complete verification, Sony says it can lose access to voice chat, messaging, parties, Discord voice chat, broadcasting to YouTube or Twitch, and some in-game communication features.
So this isn’t just a policy change sitting in a help page somewhere. It’s a good example of age checks turning into everyday product infrastructure.
What makes this interesting to me is that it changes the feel of the platform. Verification stops being a rare edge-case thing and starts acting more like a gate you pass through if you want the full social version of the product.
I get why companies are doing it, especially with pressure around online safety, but it also feels like a preview of a more verification-heavy internet where more basic features sit behind proof-of-age or proof-of-person systems.
Curious how people here see it:
Is this a reasonable tradeoff for safety?
Or does it feel like the start of mainstream platforms normalizing identity checks for standard features?
BrianFagioli writes: Mozilla says it used an early version of Anthropic's Claude Mythos Preview to comb through Firefox's code, and the results were hard to ignore. In Firefox 150, the team fixed 271 vulnerabilities identified during this effort, a number that would have been unthinkable not long ago. Instead of relying only on fuzzing tools or human review, the AI was able to reason through code and surface issues that typically require highly specialized expertise.
The bigger implication is less about one release and more about where this is heading. Security has long favored attackers, since they only need to find a single flaw while defenders have to protect everything. If AI can scale vulnerability discovery for defenders, that dynamic could start to shift. It does not mean zero days disappear overnight, but it suggests a future where bugs are found and fixed faster than attackers can weaponize them. "Computers were completely incapable of doing this a few months ago, and now they excel at it," says Mozilla in a blog post. "We have many years of experience picking apart the work of the world's best security researchers, and Mythos Preview is every bit as capable. So far we've found no category or complexity of vulnerability that humans can find that this model can't."
The company concluded: "The defects are finite, and we are entering a world where we can finally find them all."
Brenda Schafer Kennedy, SM ’93, knows that sometimes the best medicine comes with four legs and fur. Kennedy is the chief veterinary and research officer for Canine Companions, a California-based, nationwide organization that provides assistance dogs at no cost to children, veterans, and adults with disabilities.
“The need is enormous: One in four people in the US has a disability. We have so many people who could benefit from these dogs,” Kennedy says.
While service dogs might be best known for guiding the blind, Canine Companions trains dogs to do such things as open doors for wheelchair users or alert deaf people to doorbells, fire alarms, and other key sounds. Its psychiatric service dogs help veterans suffering from post-traumatic stress disorder—waking them from nightmares, for example. To date, it’s paired more than 7,000 dogs with people in need.
It’s critical to ensure that every service dog placed is healthy, and Kennedy—a veterinarian—spearheads the organization’s efforts to breed dogs with that in mind. “We wouldn’t place a dog that might have a life-shortening or a significant medical issue that a person might have to manage,” she says.
Kennedy also takes the lead in developing tech to support Canine Companions’ work. She is a co-inventor of CanineAlert, a patented device that sends a signal to a dog’s collar so the dog can interrupt a nightmare when its owner’s heart rate spikes. The technology may soon expand to address daytime anxiety episodes.
“These dogs can really be not only life-transforming in terms of providing people with independence, but critically essential and even life-saving,” she says.
An animal lover since childhood, Kennedy earned her undergraduate degree from Northwestern University before coming to MIT for her master’s in biology. “I found incredible mentors at MIT,” she says, noting that she particularly enjoyed working with Professor Hazel Sive, whose lab studied African clawed frogs.
The research was fascinating, but Kennedy wanted to work in hands-on medicine, so she obtained her veterinary degree from Tufts University. She then spent 16 years in private practice. Today, she is delighted to combine animal care with research at Canine Companions. “I had a passion for doing something really mission-oriented,” she says. “I love the idea of helping people through the human-canine bond.”
In addition to providing service, dogs also offer something elemental, Kennedy says: “Dogs add unconditional love to the mix. They support emotional and mental health for people and can be bridges to the community.”
If you’ve been to an eye doctor and had an image taken of the inside of your eye, chances are good it was done with optical coherence tomography (OCT)—a technology invented by clinician-scientist David Huang ’85, SM ’89, PhD ’93, and now used in 40 million procedures per year.
OCT is a noninvasive technique used to produce detailed images of complicated biological tissues such as the retina and the plaques that can build up in coronary arteries. It maps the time-of-flight of light waves reflected from tissue and paints a high-resolution picture of internal structures.
“It uses infrared light that’s barely visible compared to the bright flash of fundus photography [another common method of eye imaging] and provides a lot more information—three-dimensional rather than two-dimensional information—at higher resolution,” Huang says. The discovery earned him and his co-inventors slots in the National Inventors Hall of Fame in 2025 as well as the Lasker Award and the National Medals of Technology and Innovation in 2023.
Huang didn’t expect to change the paradigm of eye imaging when he began studying electrical engineering as an undergraduate at MIT, but he was interested in using an engineering mindset to contribute to medical advancements. That, he thought, could be his way to follow in the footsteps of his father, who was a family practitioner.
OCT emerged from his work as an MD-PhD student in the Harvard-MIT Program in Health Sciences and Technology. While studying ultrafast lasers at MIT under James Fujimoto ’79, SM ’81, PhD ’84, the Elihu Thomson Professor of Electrical Engineering, Huang was tasked with using the lasers to improve various ophthalmological tasks, including measuring the thickness of the cornea and retina.
Huang thought an approach known as interferometry, which could measure the time of flight down to one quadrillionth of a second, could improve thickness measurements to micrometer resolution. Huang’s experiments revealed that the technique was able to detect very faint signals arising from fine internal structures within the retina. Fujimoto and Huang realized the potential for inventing a new type of imaging and enlisted the help of Eric Swanson, SM ’84, who was using interferometry for intersatellite communications at Lincoln Laboratory, to develop an OCT machine for biological applications. Huang tested the new machine on several types of tissues accessed through Harvard Medical School and found it particularly successful in imaging retinal and coronary artery samples. He and his colleagues published their initial findings in Science in 1991, establishing OCT as a new imaging modality.
“Because of our ability to form collaborations with medical doctors and the more advanced technologies that were easily accessible at Lincoln Lab and MIT, we were able to make this new imaging technology take off when other people who were exploring around the same area were not able to demonstrate imaging results,” he says.
After the groundbreaking invention, Huang finished his academic and medical training as an ophthalmologist while Fujimoto and Swanson formed a startup company to ensure that the device got into medical offices.
In the decades since, Huang has continued to refine OCT for various applications. Today, as the director of research at Oregon Health and Science University’s Casey Eye Institute, he leads research groups exploring new ways to use OCT in techniques such as OCT angiography (imaging blood flow down to the capillary level) and OCT optoretinography (mapping the light response in retinal photoreceptor cells).
In addition to conducting research, he also sees patients and is the cofounder of GoCheck Kids, a digital platform for pediatric eye screening.
Huang credits his knack for innovation to his position at the nexus of diverse fields. “It’s hard for a pure medical doctor or a pure laser engineer to realize that there is an opportunity to invent a new device that solves a real problem in the clinic,” he says. “But it’s really easy when you have knowledge on both sides.”
Heat generated by electronic devices is usually a problem, but a team led by Giuseppe Romano, a research scientist at MIT’s Institute for Soldier Nanotechnologies, has found a way to use it for data processing that doesn’t rely on electricity.
In this analog computing method, input data is encoded not as binary 1s and 0s but as a set of temperatures based on the waste heat already present in a device. The flow and distribution of that heat through tiny silicon structures, designed by a physics-based optimization algorithm they developed, forms the basis of the calculation. Then the output is represented by the power collected at the other end.
The researchers used these structures to perform a simple form of matrix-vector multiplication, the fundamental mathematical operation that machine-learning models like large language models use to process information and make predictions. The results were more than 99% accurate in many cases.
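For reference, the operation those silicon structures carry out physically is an ordinary matrix-vector product. The sketch below is our toy model, not the team's simulation: injected noise stands in for analog error, and the script checks the resulting per-output accuracy.

```python
import numpy as np

rng = np.random.default_rng(seed=1)

W = rng.uniform(0.0, 1.0, size=(4, 4))  # stands in for a designed heat-flow structure
x = rng.uniform(0.5, 1.0, size=4)       # inputs encoded as temperatures

exact = W @ x                            # the ideal matrix-vector product
noise = rng.normal(0.0, 0.003, size=4)   # small multiplicative analog error
analog = exact * (1.0 + noise)           # what the heat-based readout would report

accuracy = 1.0 - np.abs(analog - exact) / np.abs(exact)
print(np.round(accuracy * 100, 2))       # typically above 99% per output
```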
The researchers still have to overcome many hurdles to scale up this computing method for modern deep-learning models, such as the challenges involved in tiling millions of these structures together. As the matrices become more complicated, the results also become less accurate, especially when there is a large distance between the input and output terminals.
But the technique could also have a more immediate use: detecting problematic heat sources and measuring temperature changes in electronics without consuming extra energy. This would also eliminate the need for multiple temperature sensors that can currently take up space on a chip.
“Most of the time, when you are performing computations in an electronic device, heat is the waste product,” says Caio Silva, an undergraduate student in the Department of Physics and lead author of a paper on the work. “You often want to get rid of as much heat as you can. But here, we’ve taken the opposite approach by using heat as a form of information itself.”
Around 2.3 billion years ago, a pivotal period known as the Great Oxidation Event set the evolutionary course for oxygen-breathing life on Earth. But MIT geobiologists and colleagues have found evidence that some early forms of life evolved the ability to use oxygen hundreds of millions of years before that.
By mapping enzyme sequences from several thousand modern organisms onto an evolutionary tree of life, the researchers traced the origins of an enzyme that enables organisms to use oxygen to the Mesoarchean period, 3.2 to 2.8 billion years ago.
The team’s results may help explain a longstanding puzzle in Earth’s history: Given that the first oxygen-producing microbes likely emerged before the Mesoarchean, why didn’t oxygen build up in the atmosphere until hundreds of millions of years later? Having evolved the key enzyme, organisms living near those microbes, called cyanobacteria, may have gobbled up the small amounts of oxygen they produced.
“This does dramatically change the story of aerobic respiration,” says Fatima Husain, SM ’18, PhD ’25, a research scientist in MIT’s Department of Earth, Atmospheric, and Planetary Sciences (EAPS) and a coauthor with Gregory Fournier, an associate professor of geobiology, of a paper on the research. “It shows us how incredibly innovative life is at all periods in Earth’s history.”
How does the physical matter in our brains translate into thoughts, sensations, and emotions? It’s hard to explore that question without neurosurgery. But in a recent paper, MIT philosopher Matthias Michel, Lincoln Lab researcher Daniel Freeman, and colleagues outline a strategy for doing so with an emerging tool called transcranial focused ultrasound.
This noninvasive technology reaches deeper into the brain, with greater resolution, than techniques such as EEG and MRI. It works by sending acoustic waves through the skull to focus on an area of a few millimeters, allowing specific brain structures to be stimulated so the effects can be studied.
The researchers lay out an experimental approach that would use the tool to help test two competing conceptions of consciousness. The “cognitivist” concept holds that brain activity generating conscious experience must involve higher-level processes such as reasoning or self-reflection, likely using the frontal cortex. The “non-cognitivist” idea is that specific patterns of neural activity—more localized in subcortical structures or at the back of the cortex—give rise to subjective experiences directly.
“This is a tool that’s not just useful for medicine, or even basic science, but could also help address the hard problem of consciousness,” Freeman says. “It can probe where in the brain are the neural circuits that generate a sense of pain, a sense of vision, or even something as complex as human thought.”
Embedded in the body’s mucosal surfaces, proteins called lectins bind to sugars found on cell surfaces. A team led by MIT chemistry professor Laura Kiessling has found that one such protein, intelectin-2, both helps fortify the mucosal barrier and offers broad-spectrum protection against harmful bacteria found in the GI tract.
Intelectin-2 binds to a sugar molecule called galactose that is found on bacterial membranes, the team found, trapping the bacteria and hindering their growth; the trapped microbes eventually disintegrate, suggesting that the protein is able to kill them by disrupting their cell membranes. It also helps strengthen the intestine’s protective lining by binding to the galactose in the mucins that make up mucus.
“What’s remarkable is that intelectin-2 operates in two complementary ways. It helps stabilize the mucus layer, and if that barrier is compromised, it can directly neutralize or restrain bacteria that begin to escape,” says Kiessling, who conducted the study with colleagues including Amanda Dugan, a former MIT postdoc and research scientist, and Deepsing Syangtan, PhD ’24.
Because intelectin-2 can neutralize or eliminate pathogens such as Staphylococcus aureus and Klebsiella pneumoniae, which are often difficult to treat with antibiotics, it could someday be adapted as an antimicrobial agent, the researchers say. Restoring desirable levels of intelectin-2 could also help people with disorders such as inflammatory bowel disease, who may have either too little of it (potentially weakening the mucus barrier) or too much (killing off beneficial gut bacteria).
“Harnessing human lectins as tools to combat antimicrobial resistance opens up a fundamentally new strategy that draws on our own innate immune defenses,” Kiessling says. “Taking advantage of proteins that the body already uses to protect itself against pathogens is compelling and a direction that we are pursuing.”
An anonymous reader quotes a report from Ars Technica: Framework has been selling and shipping its modular, repairable, upgradable Laptop 13 for five years now, and in that time, it has released six distinct versions of its system board, each using fresh versions of Intel and AMD processors (seven versions, if you count this RISC-V one). The laptop around those components has gradually gotten better, too. Over the years, Framework has added higher-resolution screens in both matte and glossy finishes, a slightly larger battery, and other tweaked components that refine the original design. But so far, all of those parts have been totally interchangeable, and the fundamentals of the Laptop 13 design haven't changed much.
That changes today with the Framework Laptop 13 Pro, which, despite its name, is less an offshoot of the original Laptop 13 and closer to a ground-up redesign. It includes new Core Ultra Series 3 chips (codenamed Panther Lake), Framework's first touchscreen, a new black aluminum color option, a larger battery, and other significant changes. And while it sacrifices some component compatibility with the original Laptop 13, displays and motherboards remain interchangeable, so Framework Laptop owners can buy the new Core Ultra board and owners of older Framework Laptop boards can pop one into a Pro to benefit from the new battery and screen. At 1.4kg (about 3 pounds), the Laptop 13 Pro is slightly heavier than the Laptop 13's 1.3kg, but it still stacks up well against the 14-inch M5 MacBook Pro (1.55kg, or 3.4 pounds).
The Framework Laptop Pro will start at $1,199 for a DIY edition with a Core Ultra 5 325 processor, and no RAM, SSD, or operating system. A prebuilt version with Ubuntu Linux installed will start at $1,499, and Windows 11 will cost another $100 on top of that. A Core Ultra X7 358H version starts at $1,599 for a DIY edition, and a "limited batch" Core Ultra X9 388H version starts at $1,799. A bare motherboard with the Core Ultra 5 325 starts at $449, while a Core Ultra X7 358H board will cost $799. Pre-orders are available now, and begin shipping in June.

Even astronauts need to level up their laptops once in a while - including the crew of Expedition 74 on board the ISS, which NASA announced last week is in the process of some computer upgrades. According to NASA, the crew met on Friday to review plans to "first replace network servers then activate their new, more powerful laptop computers." In a statement to The Verge, NASA spokesperson Joshua Finch confirmed the new laptops the astronauts will be using: "The International Space Station Program has selected the HP ZBook G9 Mobile Workstation as the next laptop for the space station."


According to HP, the custom ZBook Fury G9 …
Full source: https://www.axios.com/2026/04/20/alex-bores-ai-dividend-plan-wealth
"Alex Bores, a Democratic House candidate in New York and a top target of AI super PACs, is rolling out a plan to create an "AI dividend" in response to potential large-scale job displacement from artificial intelligence.
Bores' plan, shared exclusively with Axios, comes as AI super PACs ramp up spending against his campaign.
What they're saying: "You don't take out fire insurance because you expect your house to burn down — you have insurance in case something goes awry," Bores told Axios in an interview."

OpenAI is rolling out the latest version of its AI-powered image generator with new "thinking capabilities," allowing it to search the web to help it create multiple images from a single prompt. On Tuesday, OpenAI announced that ChatGPT Images 2.0 can now create more "sophisticated" images, with improvements to its ability to follow instructions, preserve details of your choosing, and generate text.
It's powered by OpenAI's new GPT Image 2 model, with new thinking capabilities available to ChatGPT Plus, Pro, Business, and Enterprise subscribers. When a thinking model is selected, the chatbot's image generator can pull information from the w …

YouTube is expanding its AI deepfake monitoring feature to Hollywood - meaning some celebrity AI videos could soon disappear.
The platform's likeness detection feature searches YouTube for AI deepfake content and flags it for public figures enrolled in the program. Public figures can use it to keep track of AI content on YouTube of themselves or request removal (takedowns are evaluated against YouTube's privacy policy, and not every request will be approved). YouTube began testing the feature with content creators last fall; in March, the company expanded the program to politicians and journalists. YouTube says the tool will cover celebriti …
As AI agents increasingly work alongside humans across organizations, companies could be inadvertently opening a new attack surface. Insecure agents can be manipulated to access sensitive systems and proprietary data, increasing enterprise risk.
In some modern enterprises, non-human identities (NHIs) are already multiplying faster than human ones, and that trend will accelerate with agentic AI. Solid governance and a fortified security foundation are therefore critical.
According to the Deloitte AI Institute 2026 State of AI report, nearly three in four companies (74%) plan to deploy agentic AI within two years, yet only one in five (21%) reports having a mature governance model for autonomous agents. Executives' top concerns are data privacy and security (73%); legal, intellectual property, and regulatory compliance (50%); and governance capabilities and oversight (46%).
Enterprises may not even realize they are treating agents within their environment as first-class citizens with the keys to the kingdom, creating looming blind spots and potential points of exposure. What is needed is a robust control plane that governs, observes, and secures how AI agents, as well as their tools and models, operate across the enterprise.
“A control plane is the shared, centralized layer governing who can run which agents, with which permissions, under which policies, and using which models and tools,” according to Andrew Rafla, principal, Deloitte Cyber Practice.
“Without a true control plane, you don’t really have the ability to scale agents autonomously—you just have unmanaged execution, and that comes with a lot of risk,” he says. “If you can’t answer what an agent did, on whose behalf, using what data, under what policy—and whether you can reproduce or stop it—you don’t have a functional control plane.”
Governance must make those answers obvious, not aspirational, he says. Governance is what turns AI pilots into production use cases. It’s the bridge that lets companies move from impressive experiments to safe, repeatable, enterprise-wide automation.
Without governance, agent deployments don’t fail safely. They fail unpredictably and at scale.
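The article doesn't include an implementation, but a minimal sketch shows what Rafla's checklist could look like in practice: a single chokepoint that checks an agent's permissions against a policy before each tool call and writes an audit record of what ran, on whose behalf, and under which policy. Everything below (the names, fields, and ControlPlane class) is a hypothetical illustration, not Deloitte's design.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

# Hypothetical, minimal control-plane gate: every agent action passes
# through one chokepoint that enforces policy and records an audit trail.

@dataclass
class Policy:
    name: str
    allowed_tools: set[str]    # which tools this agent may invoke
    allowed_models: set[str]   # which models it may run on

@dataclass
class AuditRecord:
    timestamp: str
    agent: str
    principal: str             # the human on whose behalf the agent acted
    tool: str
    policy: str
    allowed: bool

class ControlPlane:
    def __init__(self) -> None:
        self._policies: dict[str, Policy] = {}   # agent id -> policy
        self.audit_log: list[AuditRecord] = []

    def register(self, agent: str, policy: Policy) -> None:
        self._policies[agent] = policy

    def authorize(self, agent: str, principal: str, tool: str) -> bool:
        """Gate a single tool call and log the decision either way."""
        policy = self._policies.get(agent)
        allowed = policy is not None and tool in policy.allowed_tools
        self.audit_log.append(AuditRecord(
            timestamp=datetime.now(timezone.utc).isoformat(),
            agent=agent,
            principal=principal,
            tool=tool,
            policy=policy.name if policy else "<unregistered>",
            allowed=allowed,
        ))
        return allowed

cp = ControlPlane()
cp.register("sales-agent", Policy("crm-readonly", {"crm.read"}, {"gpt-large"}))
assert cp.authorize("sales-agent", "alice@example.com", "crm.read")
assert not cp.authorize("sales-agent", "alice@example.com", "payments.transfer")
```

Because every invocation passes through authorize(), the audit log can answer after the fact what an agent did and for whom, and halting or restricting an agent becomes a policy change rather than a hunt through scattered integrations.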
This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff. It was researched, designed, and written by human writers, editors, analysts, and illustrators. This includes the writing of surveys and collection of data for surveys. AI tools that may have been used were limited to secondary production processes that passed thorough human review.
An apple a day keeps the doctor away. Granted, 19th-century proverb writers were talking about the fruit, but Tim Cook helped give new meaning to the adage with the release of the very first Apple Watch. In fact, I'd argue that when he hands the reins to John Ternus in September, it won't be iPhones, Macs, AirPods, or the Vision Pro that defines Cook's legacy. It'll be how the Apple Watch set the course for modern health tech.
You don't have to take my word for it. In 2019, Cook himself told Mad Money host Jim Cramer, "…If you zoom out into the future, and you look back, and you ask the question, 'What was Apple's greatest contribution …
The IEA says 2025 marked a turning point for global energy, with solar posting the largest growth ever seen for any energy source and helping carbon-free power outpace rising demand. The trend led the agency to declare that the world has entered the "Age of Electricity." Ars Technica reports: The IEA report covers energy use, including the electrical grid, transportation, home heating, and other forms of consumption. As such, it can track how some of those uses are shifting, as electric vehicles displace some gasoline use and heat pumps replace gas and oil heating. It also saw a more global trend: The demand for electricity grew at twice the rate of overall energy demand. All of these went into the conclusion that we're starting the Age of Electricity. In terms of specifics, the IEA saw electric vehicle demand rise by nearly 40 percent, with electric cars making up a quarter of all cars sold last year. While that's having a measurable effect on electricity demand, it remains relatively small at the moment. It's almost certain to be contributing to the small size of the rise in oil use last year: 0.7 percent. In absolute terms, that's less than half the average rise of the previous decade.
[...] When it comes to supplying electrons for those alternatives, the central story is solar power. "The absolute increase of solar PV generation in 2025 is the largest ever observed for any source," the IEA says, "excluding years marked by rebounds from global economic shocks such as COVID-19." In other words, with nothing in particular driving the energy markets in 2025, solar's growth was unprecedented. On its own, its growth covered a quarter of the rising demand for all forms of energy. If you limit it to electricity, increased solar production covered over two-thirds of the increased demand. Overall, solar generated over 2,700 terawatt-hours last year, more than double its output from three years earlier. It now accounts for over 8 percent of the world's total electricity production. Thirty individual countries installed at least a gigawatt of solar last year, and it is now the single largest grid source by capacity (though other sources still outproduce it at the moment).
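Those figures hang together; a quick back-of-the-envelope check (my arithmetic, using only the numbers quoted above) makes the scale concrete:

```python
# Consistency check on the IEA figures quoted above.
solar_2025_twh = 2700        # "over 2,700 terawatt-hours last year"
solar_share = 0.08           # "over 8 percent" of world electricity

# Implied total world electricity production:
total_twh = solar_2025_twh / solar_share
print(f"implied world generation: ~{total_twh:,.0f} TWh")   # ~33,750 TWh

# "More than double its output from three years earlier" puts 2022
# solar generation below half of 2025's:
print(f"implied 2022 output: under {solar_2025_twh / 2:,.0f} TWh")  # <1,350 TWh
```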
An anonymous reader quotes a report from Denver7: Maryland is poised to become the first state in the country to ban "surveillance pricing." The practice refers to companies using a shopper's personal data, such as browsing history, location, or purchasing behavior, to tailor prices to individual customers. The Protection From Predatory Pricing Act, passed this month and sent to the governor for a signature, would prohibit food retailers and third-party delivery services from using the practice. Violations would be treated as deceptive trade practices under state law, with potential fines and lawsuits. While Consumer Reports called the move "encouraging," it warned that the final version contains "loopholes" that don't fully protect consumers. Some of the exemptions noted in the report include "applying the ban only to the use of personal data to set higher prices without establishing a baseline or standard price; exempting pricing tied to loyalty or membership programs, even if prices are higher; and exempting pricing linked to subscriptions or subscription-based services."
With growing focus on the existential threat quantum computing poses to some of the most crucial and widely used forms of encryption, cryptography engineer Filippo Valsorda wants to make one thing absolutely clear: Contrary to popular mythology that refuses to die, AES 128 is perfectly fine in a post-quantum world.
AES 128 is the most widely used variety of the Advanced Encryption Standard, a block cipher formally adopted by NIST in 2001. While the specification also allows 192- and 256-bit key sizes, AES 128 has long been the preferred choice because it hits the sweet spot between the computational resources it requires and the security it provides. With no practical vulnerabilities found in its quarter-century history, a brute-force attack is the only known way to break it. And with 2¹²⁸ (about 3.4 × 10³⁸) possible keys, such an attack would take about 9 billion years using the entire bitcoin mining resources as of 2026.
Over the past decade, something interesting happened to all that public confidence. Amateur cryptographers and mathematicians seized on Grover's algorithm, a quantum search method, to declare the death of AES 128 once a cryptographically relevant quantum computer (CRQC) came into being. A CRQC, they said, would halve the cipher's effective key length, cutting the search space to just 2⁶⁴, small enough that, if true, the same bitcoin mining resources could brute-force it in less than a second (the comparison is purely for illustration; a CRQC almost certainly couldn't run like clusters of bitcoin ASICs and, more importantly, couldn't parallelize the workload the way the amateurs assume).
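To see why the 2⁶⁴ figure misleads, it helps to run the arithmetic. The rates below are illustrative assumptions of mine (the article gives none), with roughly 10²¹ key tests per second standing in for the whole bitcoin network:

```python
# Rough arithmetic behind the claims above. All rates are assumptions
# for illustration, not figures from the article.

KEYSPACE = 2 ** 128        # ~3.4e38 possible AES-128 keys
BITCOIN_RATE = 1e21        # assumed classical key tests per second
YEAR = 3.156e7             # seconds per year

# Classical brute force, trying every key with "all of bitcoin":
print(f"{KEYSPACE / BITCOIN_RATE / YEAR:.1e} years")   # ~1.1e10 years

# The naive reading of Grover: sqrt(2^128) = 2^64 operations, which at
# the classical rate above would indeed take well under a second:
print(f"{2 ** 64 / BITCOIN_RATE:.3f} seconds")         # ~0.018 s

# But Grover's iterations must run one after another, and each one
# evaluates the full AES circuit on error-corrected hardware. Assuming
# an optimistic 1 millisecond per fault-tolerant iteration:
print(f"{2 ** 64 * 1e-3 / YEAR:.1e} years")            # ~5.8e8 years

# Parallelism barely helps: splitting the search across M machines cuts
# the time only by sqrt(M), so even a million CRQCs would shave the
# figure above by just a factor of 1,000.
```

The quadratic speedup is real, but it buys a shorter sequential search, not a cheap parallel one; that is the crux of Valsorda's point.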
This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.
As human society has expanded, animals have started struggling to hear one another. For many birds, the noise has grown so loud that they’ve begun to sing with faster trills. Now, their mating calls aren’t as effective.
The growing hubbub can also increase bird-on-bird conflict, and entire species that can’t handle urban clamor simply leave town for good. But there are technological solutions to the noises hurting animals—and they could help humans, too.
—Clive Thompson
In May, a new subway segment will connect downtown Los Angeles to the Pacific Ocean. What today can be an hours-long drive through a busy, museum-packed stretch of the city will be, if all goes well, a 25-minute train ride.
The existence of subway stops in this part of town—known as Miracle Mile—is a technological triumph over geography and geology. Find out why.
—Adam Rogers
Both of these stories are from the next issue of our print magazine, which is all about nature. Subscribe now to read it when it lands tomorrow.
The must-reads
I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.
1 Apple’s Tim Cook is stepping down as CEO
Hardware chief John Ternus will take over from him in September. (CNN)
+ Ternus’ defining challenge may be fixing Apple’s AI strategy. (CNBC)
+ How does Cook compare with Apple’s other CEOs through the years? (NYT $)
2 Anthropic’s new Amazon deal escalates the compute war with OpenAI
Anthropic will spend more than $100 billion on Amazon compute. (Axios $)
+ OpenAI touted its compute advantage over Anthropic two weeks ago. (Bloomberg $)
+ Here’s why the AI compute explosion has only just begun. (MIT Technology Review)
3 Silicon Valley is trying to get into the news business
The latest addition is Andreessen Horowitz’s MTS. (The Information $)
+ OpenAI recently bought a business talk show. (NPR)
+ They join Elon Musk’s X and a new Peter Thiel-backed startup. (Axios)
4 The banking industry is scrambling to get access to Anthropic’s Mythos
As regulators review the risks to financial services. (Reuters $)
+ Germany's central bank has called for wider access to Mythos. (Bloomberg $)
5 War memes are turning conflict into content
Fueled by recommendation systems designed to keep you hooked. (Wired $)
+ AI is turning the Iran conflict into theater. (MIT Technology Review)
6 AI is boosting worker productivity, but not their paychecks
Employees aren’t financially benefiting from their extra efficiency. (Quartz)
+ New data sheds light on the current state of AI. (MIT Technology Review)
7 Amazon’s ambition to rival Starlink has hit a setback
After a Blue Origin rocket was grounded. (FT $)
8 Jeff Bezos’s AI lab has neared a $38 billion valuation
In an imminent $10 billion fundraising deal from investors. (FT $)
+ The startup focuses on AI for engineering and manufacturing. (Reuters $)
9 Scientific AI agents have got their own social network
Where they share, debate, and discuss research papers. (Nature)
10 A Mars rover has discovered new “origin-of-life” molecules
They suggest Mars wasn’t always a lifeless red desert. (Gizmodo)
One More Thing

There is more stuff being created now than at any time in history, but our data is more fragile than ever. One day in the future, YouTube’s videos may permanently disappear. Facebook—and your uncle’s holiday posts—will vanish.
For many archivists, alarm bells are ringing. Across the world, they’re scraping up defunct websites, saving at-risk data collections, and developing data storage technologies that could last thousands of years.
Their work raises complex questions. What is important to us? How do we decide what to keep—and what do we let go? Read our story on the thorny problems of digital preservation.
—Niall Firth
We can still have nice things
A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line.)
+ Apple’s forgotten co-founder recently shared his story of the company’s early days.
+ Witness a rare underwater volcanic eruption in the Solomon Islands.
+ Learn what makes Shakespeare’s writing so effective in this masterful analysis.
+ An Artemis II astronaut shared a stunning iPhone video showing Earth disappear behind the Moon at 8x zoom.
On Monday, the International Energy Agency released its analysis of the energy trends of 2025, covering the entire globe. It confirms and extends the primary conclusion of a more limited analysis by the International Renewable Energy Agency: 2025 was the first year of solar's dominance. Increased solar production was a key reason the growth of carbon-free energy sources outpaced rising demand.
Coupled with a massive growth in battery storage and relatively stagnant fossil fuel use, the year has led the IEA to declare that "the world has entered the Age of Electricity."

Yelp is giving its chatbot assistant a major upgrade, turning the platform into something closer to a digital concierge with a suite of new features designed for "getting things done." The move, one of several AI-focused updates in recent months, is part of a broader industry push to make AI more relevant and practically useful to consumers while turning huge troves of user-generated data into a competitive edge.
In a press release, Yelp says the Yelp Assistant chatbot will be at "the center of the app experience," where it can answer questions, make recommendations, and even handle bookings in a single conversation. The bot will be availa …
Amazon is expanding its Anthropic partnership with a deal to invest up to another $25 billion, while Anthropic commits to spending more than $100 billion on AWS infrastructure over the next decade to power Claude. "Anthropic's commitment to run its large language models on AWS Trainium for the next decade reflects the progress we've made together on custom silicon, as we continue delivering the technology and infrastructure our customers need to build with generative AI," Amazon CEO Andy Jassy said in a statement. CNBC reports: Amazon's investment includes $5 billion into Anthropic now, with up to $20 billion in the future tied to "certain commercial milestones," according to a release. The initial investment is at Anthropic's latest valuation of $380 billion. Anthropic said in the release that it will bring nearly 1 gigawatt total of Trainium2 and Trainium3 capacity online by the end of the year.
With all of the major hyperscalers competing to build out AI capacity as quickly as possible, Amazon said in February that it expects to shell out roughly $200 billion this year on capital expenditures, mostly on AI infrastructure.