This Week in AI: July 22, 2025
It’s been a landmark week in the world of artificial intelligence—one marked by escalating rivalries, paradigm-shifting innovations, and fresh signs of economic realignment. From OpenAI’s rumored browser poised to challenge Google’s search empire to the rise of AI video tools redefining creative workflows, the momentum across sectors is undeniable. Nvidia’s continued dominance, new breakthroughs in user control and spatial intelligence, and the evolving role of AI in the workplace highlight both the speed and complexity of this transformation. This week’s coverage captures the tensions, opportunities, and questions driving the next chapter of the AI era.

Meta’s Prometheus and Hyperion data centers
Executive Summary
In a stunning show of scale and urgency, Meta is building two superintelligence-focused data centers—Prometheus (Ohio, 1 GW) and Hyperion (Louisiana, up to 5 GW)—designed to power its future AI models. These “Manhattan-sized” projects are being assembled at breakneck speed, sometimes in tents, to outpace traditional infrastructure timelines. Meta is sourcing energy from every direction: solar, wind, geothermal, nuclear, and even controversial fossil fuels, including new gas plants. The buildout also coincides with Meta’s reported shift from open-source to closed-source models, a move that could sideline U.S. leadership in open AI innovation. Meta’s energy strategy mirrors moves by Amazon, Microsoft, and Google, all of whom are securing nuclear partnerships or restarting dormant reactors to power AI compute clusters.
But there’s a catch: AI’s massive and erratic energy demands are now threatening grid stability. Grid operators warn that surging power spikes from AI training could crash local and national systems. PJM, the largest U.S. grid, has seen 800% price spikes and warned of summer shortages in high-demand areas like Pennsylvania and Virginia. In parallel, Elon Musk’s xAI is experimenting with batteries and mobile gas turbines to smooth load swings, but even that approach raises environmental red flags. Meanwhile, China is rapidly outpacing the West in solar deployment and nuclear expansion—powering its AI boom with infrastructure the U.S. lacks. The race to superintelligence is colliding head-on with the physical limits of America’s aging grid.
Relevance for Business
AI is no longer just about algorithms—it’s now a battle over electricity, infrastructure, and geopolitics. As Big Tech carves out energy reserves for its AI ambitions, businesses of all sizes will face higher electricity costs, supply volatility, and greater competition for clean power. U.S. regulators are trying to catch up, but the market reality is clear: those who secure stable energy access and efficient compute infrastructure will gain a decisive edge. This shift has major implications for data center contracts, sustainability targets, and long-term digital transformation strategies.
Calls to Action for Executives & Managers
- 🔌 Audit Energy Risk in AI Supply Chains: Assess how dependent your operations are on AI platforms that may be exposed to power constraints or price volatility.
- ⚡ Consider Hybrid Infrastructure Strategies: Explore on-prem or regional cloud options that prioritize energy resilience and sustainability.
- 🌱 Partner on Renewable Energy Initiatives: Collaborate with energy providers or invest in renewable credits to stabilize long-term costs and support grid capacity.
- 📉 Track Emerging AI Efficiency Tools: Monitor breakthroughs in efficient model architectures to reduce compute requirements and lower power draw.
- 🌍 Watch Global Power Trends: China’s AI energy playbook (solar + nuclear) may influence pricing, supply chain competitiveness, and regulatory responses in the West.
- 🛠️ Plan for AI Infrastructure Inflation: Budget for increased costs tied to AI compute, energy, and sustainability reporting starting in 2026 and beyond.
Here’s what leading outlets are reporting about Meta’s massive AI data‑center investment:
Reuters reports that Mark Zuckerberg announced Meta will invest “hundreds of billions of dollars” to build multiple multi‑gigawatt AI data centers—Prometheus (1 GW, Ohio, online 2026) and Hyperion (scalable up to 5 GW in Louisiana)—aimed at powering its Superintelligence Labs. The move is backed by Meta’s highly profitable ad business (~$165B in 2024 revenue), while addressing deployment pace, green energy access, and leadership in compute‑intensive AI (Reuters).
IT Pro highlights that the initiative involves temporary “tented” GPU clusters to fast‑track capacity, with capex topping $68 billion over 18 months and an additional $29B raised. But they also raise environmental and grid‑strain concerns—citing power and water demands and ESG trade‑offs (IT Pro).
Barron’s emphasizes the strategic scale and talent acquisition: hundreds of millions to attract AI experts from Google and OpenAI, and aiming for million‑GPU clusters by 2027. They note Meta’s 2025 capex could reach $64–72B, adding to sector-wide data‑center capex projected to hit $1.7 trillion by 2035 (Barron’s).
Fast Company dives into the “tent strategy” too—designed for rapid deployment, weather‑proof and hurricane‑resistant, with Hyperion expected to draw 2 GW by 2030, scaling to 5 GW later (Fast Company).
Business Insider frames this as an urgent “arms race”: an accelerated push after Llama 4 underperformed, with top talent lured by seven‑figure deals, and the tented approach signaling extreme speed and scale (Business Insider).
🧭 Synthesis
Across reputable outlets, the narrative converges:
- Scale & Urgency: Meta is building multi‑GW “superclusters” (Prometheus, Hyperion) to dominate AI compute infrastructure—even using tented modules for accelerated deployment.
- Financial & Talent Commitment: Hundreds of billions in capex, record hiring from top AI firms, and scaling toward million‑GPU clusters by 2027.
- Environmental & Infrastructural Risks: Massive power and water demand raising serious grid, ESG, and regulatory concerns.
- Competitive & Strategic Positioning: Aims to bid for leadership in superintelligence, pressuring peers like OpenAI, Google, Microsoft, AWS to escalate their own infrastructure plays.

OpenAI’s New ChatGPT Agent Might’ve Just Stolen Your Job
Week of July 22, 2025
OpenAI unveiled a powerful new ChatGPT Agent, combining their Deep Research and Operator tools into a cloud-based assistant capable of multi-step tasks—like planning events, retrieving wardrobe recommendations, optimizing travel logistics, and generating spreadsheets. Unlike previous tools, the Agent performs autonomously over long sessions, pausing only when sensitive input or critical decisions are required. Sam Altman emphasized safety and user control in his launch remarks, though the potential to displace entry-level jobs has sparked concern.
Meanwhile, OpenAI’s coding agent nearly won a global programming competition, showcasing the rapidly closing gap between top-tier human talent and autonomous AI. This adds to a growing trend where creative, technical, and administrative roles are increasingly augmented—or replaced—by AI systems. Complementing this was an upgrade to OpenAI’s image editing model, now capable of near-Photoshop-level precision in modifying specific image regions without redrawing entire scenes.
Meta made headlines by poaching OpenAI talent, investing heavily in compute infrastructure, and possibly moving away from open-source AI with Llama. The Grok team, meanwhile, leaned into virality with the controversial launch of anime “waifu” chatbots, raising red flags about AI’s influence on human relationships. On the creative side, Runway launched Act-Two, a video-generation tool that captures full-body and facial expressions, marking a breakthrough for solo creators, marketers, and performers.
Elsewhere, China introduced Kimi K2, an open-source model rivaling GPT-4-level tools in creative writing and code generation. Higgsfield dropped a new VFX pack, including surreal effects like fire breath and decapitation, while UBTech’s Walker 2 robot stunned audiences with self-swapping batteries. Closing out the week, Suno 4.5 enabled users to turn voice or instrument input into complete songs, advancing the vision of AI-generated streaming music. Gwern’s new essay—proposing that LLMs should be allowed to “dream”—offered a philosophical counterpoint to this rapid industrial progress.
Relevance for Business Leaders
- AI agents are now capable of completing complex, multi-step tasks autonomously, signaling an inflection point for knowledge work, logistics, and administrative support.
- OpenAI’s near-win in a professional coding competition highlights serious disruption potential in technical roles, especially at the entry level.
- Tools like Runway’s Act-Two and Suno 4.5 empower solo marketers and creators, democratizing video and music content generation with minimal expertise or budget.
- Meta’s aggressive recruitment and infrastructure building, along with OpenAI’s safety disclosures, reinforce that AI innovation is now a geopolitical and competitive race, with risks and rewards accelerating in tandem.
Calls to Action
- Test ChatGPT Agent (if you’re a Pro or Plus user) to evaluate its capabilities for your business operations. Identify processes that could be offloaded or streamlined.
- Review your workforce strategy—especially roles involving research, planning, or repetitive analysis—and begin upskilling teams in AI collaboration and oversight.
- Integrate creative AI tools (like Runway or Suno) into your marketing pipeline for low-cost content creation, A/B testing, and brand engagement.
- Monitor employee and consumer AI use cases for ethical implications, particularly around AI companions and data security in agent-based workflows.
- Stay informed on open-source model developments like Kimi K2, which may offer cost-effective, customizable alternatives to proprietary tools.

LLM Daydreaming (Gwern.net)
Author: Gwern Branwen | Published: July 12–14, 2025
Summary
Despite their impressive performance, Large Language Models (LLMs) have not yet produced truly novel insights or breakthroughs. Gwern argues this is due to missing faculties such as the ability to learn continuously, think in the background, and simulate a “default mode network” like the human brain. To bridge this cognitive gap, he proposes a “Day-Dreaming Loop” (DDL)—an algorithmic background process where an LLM randomly samples pairs of ideas, generates creative combinations, and filters for insights worth remembering. Though computationally expensive, such wasteful-seeming activity may be critical for producing genuine novelty and innovation.
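The loop Gwern describes is simple enough to sketch. The version below is a toy illustration of that shape only, with hypothetical stand-in functions (`combine`, `score`) marking where the real proposal would call an LLM to combine ideas and a critic model to filter them:

```python
import random

def combine(a: str, b: str) -> str:
    # Stand-in for an LLM prompt such as "find a non-obvious
    # connection between these two concepts".
    return f"What if {a} were applied to {b}?"

def score(thought: str) -> float:
    # Stand-in for an LLM critic rating novelty/usefulness in [0, 1].
    # Here: a trivial word-diversity heuristic so the sketch runs end to end.
    return min(1.0, len(set(thought.split())) / 12)

def daydream(memory: list[str], steps: int = 100, keep_above: float = 0.5) -> list[str]:
    """Background loop: sample idea pairs, combine, filter, remember."""
    kept = []
    for _ in range(steps):
        a, b = random.sample(memory, 2)   # randomly pair two stored ideas
        thought = combine(a, b)           # generate a creative combination
        if score(thought) >= keep_above:  # keep only promising insights
            kept.append(thought)
            memory.append(thought)        # feed insights back into memory
    return kept

ideas = ["sparse attention", "protein folding", "ad auctions", "grid balancing"]
insights = daydream(ideas, steps=20)
```

The feedback step at the end of the loop is what makes it "daydreaming" rather than brainstorming: retained combinations become raw material for later combinations, which is also why Gwern expects the process to be computationally expensive.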
Relevance for Business
- LLMs may not yet be good R&D partners because they lack persistent memory and offline creative processing—key traits that enable human insight.
- Gwern’s proposed DDL model could guide the next generation of “thinking” AI systems, useful in sectors like product innovation, drug discovery, or strategic forecasting.
- Companies could begin investing in “daydreaming AIs” to create proprietary datasets that form a competitive moat—a key strategy as open-source models proliferate.
Calls to Action
- Tech executives and AI leads should consider funding research into continuous learning and spontaneous creativity in AI.
- Enterprise AI teams might explore ways to integrate offline idea generation mechanisms (e.g., using internal memory graphs and concept recombination).
- Investors should look for startups building “thinking machines” with background novelty search capabilities—not just response optimization.
- Content-based industries (media, pharma, design) can prepare for a future where only the most novel outputs—generated from “expensive thinking”—hold unique economic value.

Why AI-powered hiring may create legal headaches
Author: Chris Stokel-Walker, Fast Company
AI tools like ChatGPT are increasingly being used to streamline hiring, but new research warns that they may unintentionally introduce bias and expose companies to legal risk. A recent study, published on Cornell’s arXiv, analyzed thousands of job applications and found that popular large language models from OpenAI, Anthropic, Google, and Meta often fail to deliver fair results across race and gender. While some models achieved gender parity, racial and intersectional fairness remained elusive—with impact ratios falling below legal thresholds. Management expert Stefan Stern cautions that overreliance on AI may erode trust with job candidates and damage company culture.
Relevance for Business
- Legal exposure is rising: AI-driven hiring decisions, especially those made using off-the-shelf tools, could lead to discrimination lawsuits if bias is proven.
- Brand and culture risk: Candidates may reject offers from companies that rely on impersonal or opaque AI screening, hurting recruitment efforts.
- Compliance pressure: Fairness metrics like impact ratio are now critical to monitor in HR tech adoption—especially in regulated industries or diverse hiring initiatives.
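The impact ratio the study relies on is commonly computed under the EEOC “four-fifths rule”: each group’s selection rate divided by the most-favored group’s rate, flagged when it falls below 0.8. A minimal sketch of that check, with illustrative group names and counts (not figures from the study):

```python
def impact_ratios(outcomes: dict[str, tuple[int, int]]) -> dict[str, float]:
    """outcomes maps group -> (selected, total applicants)."""
    rates = {g: sel / total for g, (sel, total) in outcomes.items()}
    best = max(rates.values())  # most-favored group's selection rate
    return {g: rate / best for g, rate in rates.items()}

# Illustrative screening outcomes from an AI resume filter
screening = {
    "group_a": (30, 100),  # 30% selection rate
    "group_b": (18, 100),  # 18% selection rate
}

ratios = impact_ratios(screening)
flagged = [g for g, r in ratios.items() if r < 0.8]  # four-fifths threshold
# group_b's ratio is 0.18 / 0.30 = 0.6, below the 0.8 threshold
```

Running this kind of check across race, gender, and their intersections is the bare minimum for the audits recommended below; falling under 0.8 does not prove illegal discrimination, but it is the conventional trigger for closer legal review.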
Calls to Action
- Educate your HR teams: Provide training on algorithmic fairness, legal risk, and best practices in responsible AI use in recruitment.
- Audit your hiring AI: Immediately review any AI tools or automation in use for candidate screening to assess bias and compliance with employment laws.
- Maintain a human-in-the-loop: Use AI to augment, not replace, human decision-making—especially in final candidate evaluations.
- Invest in EQ, not just AI: Promote emotional intelligence and ethical leadership across HR teams to strengthen trust and retention.
Source: https://www.fastcompany.com/91365346/bosses-think-twice-before-letting-ai-make-hiring-decisions
AI videos are tricking tourists into visiting places that don’t exist. That’s just the beginning
A Malaysian couple recently traveled hours to visit a breathtaking cable car attraction—only to discover it never existed, having been fabricated by AI video software like Google’s Veo 3. This incident illustrates how generative AI is blurring the line between fact and fiction, especially on social media platforms like TikTok, where fake travel content is now widespread. With deepfake-related fraud up over 2,000% in three years, the risk of real-world consequences—financial, reputational, and emotional—is rapidly increasing. As AI-generated influencers and destinations proliferate, public trust in digital media continues to erode.
Author: Jesus Diaz, Fast Company
Relevance for Business
- Brand integrity is at stake: As AI-generated content becomes indistinguishable from reality, businesses in tourism, retail, and media must guard against association with fake or misleading material.
- Consumer trust is fragile: Erosion of visual credibility impacts marketing, customer experience, and reputational risk across industries.
- Liability risks rising: Companies unknowingly promoting or amplifying deepfakes or AI-fabricated content may face legal exposure or public backlash.
Calls to Action
- Audit AI-generated content in your digital marketing pipeline to ensure authenticity and transparency.
- Implement media literacy training for staff, especially in marketing and customer-facing roles, to spot and avoid synthetic deception.
- Create clear disclaimers when using AI-generated images or videos—maintaining transparency builds brand trust in a post-truth media landscape.
- Partner with verification platforms to flag or verify high-traffic content that may impact your brand or customer base.

The most effective AI tools for research, writing, planning and creativity
Jeremy Caplan highlights a curated list of AI tools that enhance productivity across research, writing, communication, and multimedia creation. From Perplexity’s citation-rich summaries to Descript’s natural-language-based video editing, these tools don’t just save time—they help users generate new ideas, polish content, and make better strategic decisions. Caplan encourages tactics like reverse interviews, AI-assisted planning, and feedback loops with tools like ChatGPT and Claude to elevate creative work. The key takeaway: AI is most powerful not when it replaces us, but when it collaborates with and sharpens human thinking.
Author: Jeremy Caplan, Fast Company / Wonder Tools
Relevance for Business
- Boosts team productivity by reducing time spent on research, content creation, and project planning.
- Improves creative output across roles—from marketers and analysts to educators and entrepreneurs—without requiring technical expertise.
- Enhances decision-making by surfacing blind spots and generating more comprehensive planning scenarios through AI collaboration.
Calls to Action
- Test tools like Perplexity, Descript, and Gamma within your team to explore efficiency gains in content development and presentations.
- Use AI for structured feedback by having it critique drafts, uncover gaps, or surface unseen insights.
- Try “reverse interviewing” and project planning in Claude or ChatGPT to extract team knowledge and improve strategic clarity.
- Educate teams on prompt strategies to unlock more nuanced, human-AI collaboration—not just task automation.

The era of free AI scraping may be coming to an end
Cloudflare has announced it will begin blocking AI crawlers by default, signaling a major shift in how internet content is protected from unauthorized scraping. As one of the internet’s biggest infrastructure providers—managing 20% of all traffic—Cloudflare’s move may give teeth to publisher demands for compensation from AI companies using their content. The company’s new “Pay Per Crawl” system offers a monetization model, potentially transforming bot traffic from a threat into an opportunity. This tipping point may help publishers take back control, even before new regulations or lawsuits reshape the landscape.
Author: Pete Pachal, Fast Company / Media Copilot
Relevance for Business
- Publishers and content creators gain leverage: AI crawlers may now be blocked or monetized, opening up new revenue opportunities and protection strategies.
- CDN-level defenses shift the power dynamics: Infrastructure providers like Cloudflare are giving websites tools to control access by AI bots.
- Search expectations are changing: As AI chatbots replace traditional search, businesses must rethink how they serve both human and bot audiences.
Calls to Action
- Review your robots.txt policies and CDN settings—ensure your site isn’t being scraped without consent or compensation.
- Explore participation in Pay Per Crawl models to monetize bot traffic ethically and transparently.
- Build bot-friendly, citation-ready content that ensures fair attribution and reinforces your brand in AI-generated summaries.
- Stay ahead of AI search trends by developing branded AI tools or experiences that retain visitors and deliver value beyond chatbot snippets.
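For the robots.txt review above, a policy that opts out of the major AI training crawlers might look like the following. The user-agent tokens shown are ones these vendors have published, but the list changes often, so verify against each crawler’s current documentation:

```text
# Block common AI training crawlers (verify each vendor's current token)
User-agent: GPTBot
Disallow: /

User-agent: CCBot
Disallow: /

User-agent: Google-Extended
Disallow: /

User-agent: anthropic-ai
Disallow: /

# Leave ordinary search indexing untouched
User-agent: *
Allow: /
```

Note that robots.txt is advisory: compliant crawlers honor it, but network-level enforcement, such as Cloudflare's new default block, is what gives these policies teeth.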

OpenAI vs. Google could be the heavyweight battle of the half-century
OpenAI is reportedly preparing to launch its own AI-powered web browser—a direct challenge to Google’s dominance in search and ad revenue. With over 400 million ChatGPT users and the potential to integrate AI agents for tasks like booking and form-filling, the browser could threaten Google’s $200B ad empire and data pipeline. Meanwhile, fake AI-generated messages impersonating U.S. officials highlight growing national security concerns, and Ramp data suggests businesses are hitting AI subscription fatigue, shifting toward free or lower-tier tools. Even as spending dips, VC funding in AI startups continues at a record-breaking pace.
Author: Mark Sullivan, Fast Company
Relevance for Business
- OpenAI’s browser launch could disrupt the digital ad and search landscape, requiring companies to rethink SEO, user acquisition, and platform dependencies.
- AI impersonation threats are growing, reinforcing the need for robust security and identity verification in executive communications.
- Enterprise AI usage is maturing: businesses are prioritizing cost-effective tools and re-evaluating ROI on premium subscriptions.
- Massive venture investment in AI startups signals long-term industry transformation, with new competitors emerging across every vertical.
Calls to Action
- Reevaluate your search and advertising strategies in anticipation of AI-native platforms that may bypass traditional search engines.
- Implement deepfake detection protocols for leadership teams and sensitive accounts to prevent AI-based impersonation attacks.
- Audit AI tool usage across departments to identify cost-saving opportunities while maintaining productivity.
- Monitor AI startup ecosystems for emerging partnerships, acquisitions, or competitive threats that could reshape your market.

AI replaced me, so I decided to ride the AI wave
After losing his job to AI, Mark Quinn reframed the disruption as an opportunity—using tools like ChatGPT to streamline his job search, tailor applications, and generate strategic career insights. His approach led to a new role at a startup focused on AI-human collaboration, where he now applies what he learned to boost internal productivity. Rather than resisting the AI wave, Quinn advocates riding it—positioning AI as a co-pilot to amplify human value. His story offers a hopeful roadmap for professionals navigating a changing labor market.
Author: Mark Quinn, Fast Company
Relevance for Business
- AI can be a productivity amplifier, not just a labor replacement—especially when paired with human insight.
- Workforce transformation is inevitable: businesses need to help employees reskill and reframe their relationship with AI.
- Human-AI collaboration can create new roles—from prompt engineers to strategic AI program leads—across industries.
Calls to Action
- Encourage employees to explore AI tools for personal productivity, creativity, and career development.
- Invest in internal upskilling by offering training on AI collaboration, prompting, and strategy.
- Reframe layoffs as workforce transitions, supporting displaced employees with AI-assisted job search resources.
- Cultivate a culture of AI adoption that positions technology as an enabler of human potential—not a replacement.

These Pixar and Apple alums want to change the way you create generative AI video
A team of Pixar, Apple, Google, and Unity alumni has launched Intangible, a 3D web-based AI video tool that aims to solve one of generative AI’s biggest challenges: creative control. Unlike text-only prompting tools, Intangible introduces spatial intelligence—allowing users to manipulate 3D environments with drag-and-drop objects, real camera controls, and scene-building logic. While still rough around the edges and reliant on third-party engines like Kling for final rendering, Intangible represents a leap toward a more intuitive, visually driven AI video experience.
Author: Jesus Diaz, Fast Company
Relevance for Business
- Visual industries like film, advertising, gaming, and virtual events may gain significant efficiency through tools like Intangible, reducing pre-production time.
- AI’s next frontier is UX: solutions that blend intuitive design and spatial interfaces will be crucial for AI adoption in creative workflows.
- Democratization of video production could expand who can participate in high-quality media creation—lowering the technical barrier for marketers, SMBs, and educators.
Calls to Action
- Creative departments should explore early access to tools like Intangible to evaluate fit for storyboarding, prototyping, and content marketing.
- Reframe AI video as a collaborative process—blend human scene design with AI rendering to maintain creative control.
- Monitor developments in spatial intelligence as it becomes foundational to the next wave of generative media tools.
- Plan training for non-technical staff to use emerging no-code AI visual tools, enabling broader participation in content creation.

5 companies that could hit a $4 trillion market cap after Nvidia
Nvidia has become the first company in history to surpass a $4 trillion market cap, driven by global demand for AI infrastructure. Hot on its heels are Microsoft ($3.7T) and Apple ($3.1T), followed by Amazon ($2.3T), Alphabet/Google ($2.1T), and Meta ($1.8T), all in striking distance of the $4T mark depending on market momentum and AI-driven growth. Notably, Nvidia reached this milestone just two years after hitting $1T—underscoring the accelerated wealth creation enabled by AI leadership. With current trends, a $10 trillion company could emerge before 2030.
Author: Michael Grothaus, Fast Company
Relevance for Business
- AI market leadership is driving exponential valuation gains, making AI infrastructure a top-tier investment and strategic focus.
- Enterprise partnerships and supplier relationships with Big Tech AI leaders (like Nvidia, Microsoft, and Apple) will gain urgency and value.
- Capital markets are rewarding AI-native growth, influencing board-level decisions on R&D, M&A, and innovation initiatives.
Calls to Action
- Reassess your vendor and tech ecosystem: Align with companies leading in AI hardware, cloud, and platforms.
- Invest in AI adoption strategies to ride the market waves being rewarded by capital markets.
- Track public market dynamics: Use Big Tech AI trends as a barometer for broader sector opportunities and risks.
- Consider scenario planning for a world where trillion-dollar valuations are commonplace—what new power dynamics will emerge?

Hardware Is Eating the World
After decades of software dominance, hardware is re-emerging as a critical force in the AI era, driven by the demand for AI-embedded PCs, edge devices, and data centers. Innovations like neural processing units (NPUs) are enabling offline AI functionality, which enhances privacy, speed, and efficiency—especially when processing is done closer to the data source. Enterprise infrastructure, long treated as a utility, is now a strategic differentiator, with major shifts in build-vs-buy decisions, GPU utilization, and smart robotics integration. This resurgence positions AI-ready hardware as foundational to future business competitiveness.
Authors: Kelly Raskovich, Bill Briggs, Mike Bechtel, and Abhijith Ravinutala (Deloitte Consulting LLP)
Published via WSJ Risk & Compliance, July 17, 2025
Relevance for Business
- Enterprise hardware refresh cycles are becoming mission-critical as AI moves from the cloud to the edge.
- On-device AI capability reduces latency, enhances data security, and lowers cloud dependency.
- Build vs. buy decisions around GPUs and AI compute infrastructure now shape operational agility and cost efficiency.
- Smart robotics and IoT proliferation signal that hardware is not just back—but central to the next wave of digital transformation.
Calls to Action
- Audit your infrastructure for AI readiness: Evaluate the age and NPU capability of enterprise PCs and edge devices.
- Adopt a tiered rollout strategy: Prioritize hardware upgrades where AI ROI is highest.
- Balance cloud with edge: Explore localized processing for sensitive data to optimize performance and security.
- Plan for robotics integration: Start experimenting with task-specific bots and prepare for long-term shifts toward humanoid robotics.

Nvidia Can Sell AI Chip to China Again After CEO Meets Trump
Source: Wall Street Journal (WSJ)
Authors: Raffaele Huang & Amrith Ramkumar
Date: July 15, 2025
Summary:
Following a direct meeting with President Trump, Nvidia CEO Jensen Huang secured U.S. government approval to resume sales of the company’s H20 AI chips to China. This reverses an earlier Commerce Department restriction that had cost Nvidia billions in lost revenue. The H20 chip, tailored for Chinese markets, will now be delivered under license, and a new downgraded Blackwell-based chip for industrial use is also in development to comply with U.S. export concerns. Nvidia’s rising influence—now valued at over $4 trillion—has made Huang a central figure in the U.S.-China AI and trade negotiations, although bipartisan political scrutiny and policy volatility continue to pose risks.
Relevance for Business
- AI Market Access: Nvidia’s regained ability to sell AI chips in China opens up a key revenue stream in the world’s second-largest economy, critical for sustaining its valuation and market dominance.
- Geopolitical Navigation: Companies heavily involved in semiconductors and AI must prepare for the political dimensions of global trade and navigate shifting regulations and diplomatic tensions.
- Chip Licensing Strategy: The emergence of “customized” chips to comply with export laws could become a model for other U.S. tech firms seeking to balance innovation and compliance.
- Talent & Global Reach: Nvidia’s emphasis on maintaining access to global AI talent and markets illustrates the growing strategic importance of international collaboration in tech development.
Calls to Action
- For Executives: Monitor and prepare for regulatory fluctuations tied to AI technology exports, especially if operating in or with partners in China and the Middle East.
- For Strategy Leaders: Consider “local compliance” product strategies—such as downgraded or geo-customized AI solutions—to sustain international sales while adhering to domestic regulations.
- For Policy Teams: Engage proactively with government stakeholders to ensure alignment on tech export priorities and to mitigate geopolitical risks.
- For Investors: Watch for potential policy shifts and delays in deals (like the pending UAE chip agreement) that may impact company performance and regional AI growth dynamics.

The AI Mirage
By Derek Thompson | The Atlantic | July 2025
Derek Thompson critiques the overhyped expectations surrounding artificial intelligence, arguing that despite massive investment and media frenzy, AI has yet to transform daily life in the way boosters promise. He draws parallels to past tech bubbles like the dot-com era, warning that while AI holds potential, its current utility is still narrow and overestimated. Thompson emphasizes that many breakthroughs (e.g., chatbots, image generators) remain impressive demos, not revolutionary infrastructure. He urges society to temper its enthusiasm, focus on realistic applications, and avoid repeating cycles of inflated tech optimism.
Relevance for Business:
Thompson’s analysis offers a grounded counterpoint for executives being courted by AI vendors. Understanding the difference between meaningful innovation and market hype can prevent costly investments in tools that lack enterprise-grade utility or scalability. The piece encourages a long-term, pragmatic approach to AI integration, rooted in actual business needs rather than fear of missing out (FOMO).
Calls to Action:
- Audit AI Use Cases: Reassess where AI truly adds operational value versus where it’s being adopted for optics.
- Avoid Hype-Based Spending: Focus budgets on tools with clear ROI, not experimental or speculative platforms.
- Educate Stakeholders: Share skeptical analyses like this to balance internal expectations and foster informed decision-making.

What Really Happened When OpenAI Turned on Sam Altman
Originally published in The Atlantic, May 15, 2025. Adapted from Karen Hao’s book, Empire of AI.
Summary:
Karen Hao’s investigative essay offers a rare, behind-the-scenes account of the internal power struggle that shook OpenAI in late 2023, culminating in CEO Sam Altman’s abrupt ouster and dramatic return. The piece reveals growing tension between safety-minded leaders like co-founder Ilya Sutskever and CTO Mira Murati, and Altman’s aggressive push for commercialization. Sutskever and Murati raised ethical red flags about Altman’s leadership—especially his tendency to sidestep safety protocols—which led to his temporary removal. However, organizational chaos, investor backlash, and internal disarray swiftly reversed the decision. Hao argues that this moment—“The Blip”—marks a turning point where OpenAI abandoned its original altruistic ideals, prioritizing power, secrecy, and profits in a race toward AGI.
Relevance for Business:
This exposé is essential reading for business leaders navigating the AI revolution. It underscores how governance failures and unchecked executive control can jeopardize both innovation and public trust in transformative technologies. OpenAI’s internal culture war serves as a cautionary tale: when profitability overtakes transparency and safety, the consequences can ripple across markets, geopolitics, and society. For companies investing in or partnering with AI ventures, understanding the ethical fissures within OpenAI is vital to assessing long-term strategic risk.
Calls to Action:
- Reassess AI Governance Models: Businesses deploying AI should revisit how their leadership handles ethical oversight and safety guardrails.
- Push for Transparent AI Practices: Demand clearer safety review processes and model disclosures from AI partners or vendors.
- Monitor Executive Culture: Aligning innovation with long-term values requires executive behavior to reflect responsibility, not just ambition.
- Support Alternative Players: Consider backing emerging AI firms founded by former OpenAI leaders like Sutskever and Murati who advocate for more balanced development.
“OpenAI has become everything it said it wouldn’t be,” Hao concludes—a warning to any business watching AI’s rise without questioning who controls the narrative.
Source: https://www.theatlantic.com/technology/archive/2025/05/karen-hao-empire-of-ai-excerpt/682798/