4 Things Industry 4.0 02/03/2026

Happy Groundhog Day... aftermath, Industry 4.0!
Yesterday, Punxsutawney Phil saw his shadow—six more weeks of winter. But here's the thing about manufacturing: we've been living our own Groundhog Day loop for years. "AI will transform the factory floor." "This is the year of digital transformation." "Cybersecurity can't wait." Wake up. Repeat. Wake up. Repeat.
Except this week? The loop broke.
We learned that a Chinese state-sponsored group weaponized Claude Code to autonomously attack chemical manufacturers, tech companies, and government agencies—with AI handling 80-90% of the operation. Not theoretical. Not a proof-of-concept. Real attacks. Real targets. Real wake-up call.
But before you unplug everything and go back to clipboards, there's genuinely good news too. The same AI agent architectures that enable attacks are also making on-premise, legacy-friendly automation finally practical. Manufacturers are deploying lightweight agents that work with your 20-year-old ERP instead of demanding you rip it out.
The gap between "AI that could help" and "AI that actually fits your plant" is closing fast—if you understand how to build it right.
This week we're diving deep into AI agents: the threats, the opportunities, the architecture that makes it work, and the tools that are making it real.
Here's what caught our attention:
The First AI-Orchestrated Cyberattack Just Happened—And Manufacturing Was a Target
In mid-November, Anthropic dropped a bombshell: a Chinese state-sponsored group (designated GTG-1002) weaponized Claude Code to autonomously attack roughly 30 organizations worldwide. The targets? Major tech firms, financial institutions, government agencies—and chemical manufacturers.
This wasn't AI-assisted hacking. This was AI-led hacking.
The details:
The attackers "jailbroke" Claude Code using a technique called Context Splitting—breaking the attack into thousands of tiny, innocent-looking requests. Need to scan a network? That's just a "routine diagnostic check." Write an exploit? That's "testing code robustness." Each individual task looked legitimate. The AI never saw the full picture.
They also used Model Context Protocol (MCP) to wire Claude directly into offensive tooling—network scanners, credential harvesters, data extraction utilities. The result: an autonomous attack framework operating at machine speed.
How fast? Thousands of requests per second. A pace no human team could match.
According to Anthropic's report, AI handled 80-90% of the entire attack lifecycle independently:
- Reconnaissance and network mapping
- Vulnerability discovery and custom exploit development
- Credential harvesting and privilege escalation
- Data categorization and exfiltration
Human operators only stepped in at 4-6 critical decision points—mostly to approve moving to the next phase or confirm final data theft.
How they bypassed the guardrails:
The attackers posed as employees of a legitimate cybersecurity firm and convinced Claude it was conducting "defensive security testing." By the time Anthropic's detection systems flagged the anomalous behavior, the campaign had already launched.
Why this matters for manufacturing:
Chemical manufacturers were explicitly named among the targets. And here's the uncomfortable truth: most OT environments are even less prepared for AI-speed attacks than IT networks.
Your SOC analyst is still looking at the first alert while the AI has already mapped your network topology, identified your historian databases, tested credentials across 50 systems, and started exfiltrating production data.
Traditional signature-based detection? Useless. The attackers used standard open-source pentesting tools—nothing custom to flag.
The skepticism (and why it still matters):
Some security researchers questioned Anthropic's report, noting the lack of published Indicators of Compromise (IOCs). Others argued current AI systems can't truly operate this autonomously.
But here's the thing: even if the autonomy is overstated by 50%, the implications are staggering. AI-assisted attacks at scale are here. The barrier to entry for sophisticated campaigns just collapsed. A well-funded adversary no longer needs a team of elite hackers—they need compute and clever prompts.
The bottom line:
The attack speed that made this campaign possible—thousands of requests per second, 24/7, no fatigue, no mistakes from boredom—is now table stakes for nation-state attackers. Your defenses need to operate at the same speed, or you're bringing a clipboard to a knife fight.
Read Anthropic's full disclosure →
On-Premise AI Agents: The Reality Check Manufacturing Actually Needed

For years, you've heard the pitch: "Move to the cloud. Plug in our AI. Transform your factory overnight."
Here's what they didn't mention: Your 1998-era historian database. Your custom ERP that Bob in IT built fifteen years ago. Your proprietary PLC scripts that nobody fully understands anymore. The equipment running perfectly fine on Windows XP because upgrading would mean six weeks of downtime.
Cloud-first AI sounds great in a vendor webinar. It's a lot harder when your plant runs at 98% uptime and "rip and replace" means risking millions in lost production.
The shift that's actually happening:
As we enter 2026, a more realistic approach is gaining traction: lightweight, on-premise AI agents that integrate with your existing systems instead of demanding you replace them.
Quality Digest captured it perfectly in a recent piece: manufacturers aren't running greenfield tech stacks. They're running legacy databases, custom ERP layers, decades-old equipment, and homegrown workflows that have been optimized over years. Lean IT teams can't afford downtime, data exposure, or the risk of rebuilding systems that currently work.
What on-premise agents actually do:
These aren't the omniscient AI platforms vendors love to demo. They're focused, practical tools that:
- Analyze logs in real time to catch anomalies humans would miss
- Automate repetitive investigation work (root cause analysis, correlation)
- Surface insights to frontline teams without requiring new interfaces
- Work with your ERP, MES, and SCADA—not instead of them
The key insight: augmentation, not replacement. Your operators already know these systems. Forcing them to learn an entirely new platform is exactly how digital transformation projects die.
Why this matters now:
The explosion of AI capabilities is creating pressure to "do something with AI" in 2026. But here's the uncomfortable truth from industry surveys: only 14% of manufacturers successfully scaled AI pilots to full production by mid-2025.
The barrier isn't the AI technology. It's governance, integration, and the messy reality of manufacturing environments.
On-premise agents sidestep the hardest problems:
- No cloud migration risk → No unexpected downtime from software conflicts
- Data stays on-site → Easier compliance, no sovereignty concerns
- Incremental deployment → Start with one use case, expand as it proves value
- Works with what you have → That weird custom database? Still usable.
Real-world scenario:
Imagine your maintenance supervisor asking: "Why did Line 3 slow down yesterday afternoon?"
Without an on-premise agent: Someone pulls historian data, exports to Excel, manually correlates with shift logs, checks alarm history, maybe finds the answer in 2-3 hours.
With an on-premise agent: The agent already analyzed the anomaly overnight, correlated it with a temperature spike in the cooling system, cross-referenced the maintenance schedule, and has a summary waiting when the supervisor logs in.
Same data. Same systems. Dramatically different workflow.
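To make that concrete, here's a minimal sketch of what the overnight piece of such an agent could look like. Everything in it is illustrative: the SQLite file stands in for your historian's SQL interface, the table and column names are made up, and in practice the correlated slices would be handed to a local model to draft the written summary.

```python
# Illustrative overnight job for an on-premise agent. The SQLite file stands in
# for your historian's SQL interface; table and column names are placeholders.
import sqlite3
from datetime import date, timedelta


def overnight_line_review(db_path: str, line: str = "LINE3") -> str:
    yesterday = (date.today() - timedelta(days=1)).isoformat()
    conn = sqlite3.connect(db_path)
    rates = conn.execute(
        "SELECT ts, units_per_hour FROM production WHERE line = ? AND DATE(ts) = ?",
        (line, yesterday),
    ).fetchall()
    cooling = conn.execute(
        "SELECT ts, temp_c FROM cooling_loop WHERE line = ? AND DATE(ts) = ?",
        (line, yesterday),
    ).fetchall()
    conn.close()

    if not rates:
        return f"No production data for {line} on {yesterday}."

    # Naive anomaly check: hours where throughput fell more than 15% below the daily mean.
    mean_rate = sum(r for _, r in rates) / len(rates)
    slow_hours = {ts[:13] for ts, r in rates if r < 0.85 * mean_rate}
    hot_hours = {ts[:13] for ts, t in cooling if t > 45.0}  # threshold is illustrative

    overlap = sorted(slow_hours & hot_hours)
    if overlap:
        return f"{line} slowdown on {yesterday} lines up with cooling-loop spikes during: {', '.join(overlap)}"
    return f"{line} slowdown on {yesterday} has no obvious cooling-loop correlation."


if __name__ == "__main__":
    print(overnight_line_review("plant_historian.db"))
```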
The question you should be asking:
Quality Digest nailed the reframe: Stop asking "When will we move everything to the cloud?"
Start asking: "How do we bring AI to the systems we already trust?"
That question acknowledges reality. It respects uptime, data sovereignty, and the operators who know these systems better than any vendor consultant ever will.
The manufacturers who build AI strategies around real operational constraints—rather than idealized architecture diagrams—will modernize fastest, with the least disruption, and the highest ROI.
The bottom line:
The most successful AI deployments in 2026 won't be the flashiest. They'll be the ones that make your existing systems smarter without breaking what already works.
Read the full Quality Digest article →
Context Management and MCP: The Architecture Foundation Your AI Agents Actually Need
Here's a scenario that's playing out in manufacturing IT departments right now:
Your team wants to build an AI agent that can answer questions about production data. Simple enough, right? The agent needs to connect to your historian database. And your ERP system. And maybe pull some maintenance records. Oh, and check the CMMS for work orders. And reference some documentation stored in SharePoint.
Suddenly, you're writing custom integration code for five different systems. Each with its own authentication. Each with its own data format. Each requiring its own error handling.
Welcome to the "NĂ—M problem."
Before the Model Context Protocol, connecting N AI models to M tools meant building NĂ—M custom integrations. Every new tool meant more code. Every model update meant potential breakage. It was a nightmare that made "AI-powered manufacturing" sound a lot better in vendor demos than it worked in practice.
What MCP actually is:
Think of MCP like a USB-C port for AI applications. Just as USB-C provides a standardized way to connect any device to any peripheral, MCP provides a standardized way to connect any AI model to any data source or tool.
The protocol defines three core components:
- Tools — Named functions the AI can call (query database, create ticket, send alert)
- Resources — Read-only access to context (file contents, database views, API responses)
- Prompts — Predefined templates that guide specific workflows
An MCP server exposes these capabilities using a standard interface. An MCP client (your AI application) connects and discovers what's available. The model decides when to use which tool based on the user's request.
For the full technical breakdown, IBM's MCP explainer is a solid starting point.
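To make the server side concrete, here's a minimal sketch using the official MCP Python SDK (the mcp package). The historian tool and SOP resource are stand-ins; a real server would run real queries against your own systems.

```python
# Minimal MCP server sketch using the official Python SDK (pip install "mcp[cli]").
# The tool and resource bodies are placeholders for real historian/document lookups.
from mcp.server.fastmcp import FastMCP

server = FastMCP("plant-historian")


@server.tool()
def get_line_rate(line_id: str, start: str, end: str) -> list[dict]:
    """Return production-rate samples for a line between two ISO timestamps."""
    # Placeholder data; a real server would query the historian here.
    return [{"ts": start, "line": line_id, "units_per_hour": 118.4}]


@server.resource("sop://{line_id}")
def line_sop(line_id: str) -> str:
    """Read-only context: the standard operating procedure for a line."""
    return f"SOP for {line_id}: lockout/tagout before any intervention..."


if __name__ == "__main__":
    server.run()  # stdio transport by default, which local agents can launch directly
```

An MCP client pointed at this process discovers get_line_rate and the sop:// resource automatically; that discovery step is what kills the N×M integration problem.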
Why this matters for manufacturing:
Let's say you want your AI agent to answer: "Why did Line 3 slow down yesterday?"
Without MCP, you'd need custom code to:
- Query the historian for production rates
- Pull alarm data from your SCADA system
- Check maintenance records in your CMMS
- Cross-reference shift schedules from HR
- Format all of this for the AI to understand
With MCP, you deploy servers for each system once. The AI agent connects to all of them through the same protocol. When the question comes in, the model decides which tools to call, in what order, and how to synthesize the results.
Red Hat's recent deep-dive on building effective agents with MCP walks through this architecture in detail—including how OpenShift AI is adding built-in identity management, lifecycle tracking, and observability for MCP servers.
The bigger picture: Why this is happening now
In December 2025, Anthropic donated MCP to the Agentic AI Foundation under the Linux Foundation—co-founded with Block and OpenAI. This wasn't just a PR move. It signaled that the major players agreed: proprietary integration approaches weren't going to scale.
OpenAI's adoption was particularly telling. They deprecated their Assistants API in favor of MCP, essentially admitting that the value of an AI model is directly tied to how many things it can connect to. Walled gardens don't win when connectivity is the product. Their Agents SDK documentation shows how to wire MCP servers into agents with approval workflows built in.
The ecosystem is now growing fast:
- 1,000+ community-built MCP servers covering everything from databases to Slack to Kubernetes
- Native support in Windows 11, OpenAI's SDK, Claude Desktop, Cursor, and dozens of other tools
- Enterprise-grade features emerging: OAuth, RBAC, audit logging, rate limiting
Real-world manufacturing application:
Imagine building an "Operations Assistant" for your plant. With MCP, you could wire up:
- Historian MCP Server → Query time-series production data
- ERP MCP Server → Check inventory levels, order status
- CMMS MCP Server → Pull maintenance history, create work orders
- Documentation MCP Server → Search SOPs, technical manuals
- Alerting MCP Server → Send notifications, page on-call engineers
Your operators ask questions in plain language. The AI agent uses whichever tools are relevant. No custom integration code for each query type. No rebuilding when you add a new data source.
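Here's a hedged sketch of the client side using OpenAI's Agents SDK (the openai-agents package mentioned above). The two server scripts are hypothetical; they would be MCP servers like the one sketched earlier.

```python
# Hedged sketch: wiring two local MCP servers into an agent with the OpenAI
# Agents SDK (pip install openai-agents). Server scripts and the question are placeholders.
import asyncio

from agents import Agent, Runner
from agents.mcp import MCPServerStdio


async def main() -> None:
    # Each MCP server runs as a local subprocess speaking stdio.
    async with MCPServerStdio(params={"command": "python", "args": ["historian_server.py"]}) as historian:
        async with MCPServerStdio(params={"command": "python", "args": ["cmms_server.py"]}) as cmms:
            assistant = Agent(
                name="Operations Assistant",
                instructions="Answer plant questions using the historian and CMMS tools.",
                mcp_servers=[historian, cmms],
            )
            result = await Runner.run(assistant, "Why did Line 3 slow down yesterday afternoon?")
            print(result.final_output)


asyncio.run(main())
```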
If you want to experiment, the mcp-agent framework on GitHub implements all the patterns from Anthropic's "Building Effective Agents" guide in composable, production-ready code. You can scaffold a project in about two minutes.
The catch (because there's always a catch):
MCP is powerful, but it's not magic. Security researchers have already identified risks including prompt injection attacks, overly permissive tool combinations, and "lookalike" servers that can silently replace trusted ones.
If you read Article 1, you know this isn't theoretical—GTG-1002 used MCP to wire Claude Code into an offensive toolchain. The same flexibility that makes MCP useful for legitimate automation makes it useful for attackers.
What you should be thinking about:
- Audit what tools your agents can access. Just because an MCP server exists doesn't mean your agent needs to connect to it.
- Implement approval workflows for sensitive operations. OpenAI's MCP tooling, for example, exposes require_approval flags; use them (see the sketch after this list).
- Log everything. MCP makes it possible to trace exactly what an AI did, what data it accessed, and which tools it triggered. Build that visibility from day one.
- Start small. Connect one system. Prove the value. Then expand.
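On the approval point above, here's roughly what a human-in-the-loop gate looks like with OpenAI's hosted MCP tool in the Responses API. The server_label, server_url, and model choice are placeholders; the require_approval setting is what forces a sign-off before any tool call executes.

```python
# Sketch of a human-approval gate on MCP tool calls via OpenAI's Responses API.
# server_label, server_url, and the model choice are illustrative placeholders.
from openai import OpenAI

client = OpenAI()
response = client.responses.create(
    model="gpt-4.1",
    tools=[{
        "type": "mcp",
        "server_label": "cmms",
        "server_url": "https://mcp.plant.example.internal/cmms",
        "require_approval": "always",  # every tool call pauses until a human approves it
    }],
    input="Create a work order for the Line 3 cooling pump inspection.",
)
print(response.output)
```

With require_approval set to "always", the response comes back with an approval request item instead of an executed tool call; your application (or a human reviewer) approves or rejects it on the follow-up request.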
The bottom line:
MCP is becoming as fundamental to AI infrastructure as APIs are to traditional software. Organizations implementing it report 40-60% faster agent deployment times and significantly lower integration maintenance costs.
If you're planning to build AI agents that actually do things in your manufacturing environment—not just chat—understanding MCP isn't optional anymore. It's the foundation everything else will be built on.
Official MCP documentation →
Greg Robison's architectural deep-dive on MCP →
A Word from This Week's Sponsor
Modern Operations Control for Modern Industrial Architectures
As industrial architectures evolve, HMI and SCADA alone are no longer enough.
AVEVA is helping manufacturers move from traditional control systems to enterprise-wide Operations Control—supporting distributed operations, multi-site visibility, and real-time operational context across OT and IT.
With solutions like AVEVA Operations Control (Enterprise SCADA+), the AVEVA CONNECT Industrial Intelligence Platform, and the AVEVA Flex Subscription model, teams gain the architectural flexibility needed to:
• Design OT systems that integrate cleanly with UNS
• Enable advanced analytics and AI-driven decision support
• Scale from plant-level control to cloud-connected, hybrid architectures
• Align software investment with business growth through flexible licensing
If you’re rethinking how operations, data, and architecture come together, AVEVA provides a practical path forward.
Learn more:
- AVEVA Flex Subscription → https://www.aveva.com/en/solutions/flex-subscription/
- AVEVA Operations Control → https://www.aveva.com/en/products/aveva-operations-control/
- AVEVA CONNECT Platform → https://www.aveva.com/en/solutions/connect/
OpenAI Codex App: Your New Command Center for AI-Assisted Development

Yesterday—literally February 2, 2026—OpenAI dropped a new macOS app that might change how you build internal tools for your plant.
The Codex app is a dedicated desktop application for managing AI coding agents. It's not just a chatbot that writes code snippets. It's a command center where multiple agents work on your projects in parallel, each in isolated environments, while you supervise and steer.
If you've been skeptical about "AI coding assistants"—and given GTG-1002, you should be—this is worth understanding. Not because you need to adopt it tomorrow, but because this is the direction industrial software development is heading.
What Codex actually is:
OpenAI now offers Codex across three surfaces:
- Codex CLI — A terminal-based coding agent that runs locally (open source, built in Rust)
- Codex IDE Extension — Integration for VS Code, Cursor, and similar editors
- Codex App — The new macOS desktop app for managing multiple agents across projects
All three are connected by your ChatGPT account, so you can start a task in the CLI, continue it in the cloud, and review it in the app without losing context.
The app is powered by codex-1 (based on o3) for cloud tasks and GPT-5.2-Codex for local work—models specifically trained on real-world software engineering tasks, not just code completion.
Why this matters for manufacturing teams:
You're probably not shipping consumer software. But you are building:
- Custom dashboards for production monitoring
- Integration scripts between your historian and ERP
- Data transformation pipelines for analytics
- Internal tools your operators use daily
- Automation scripts for repetitive tasks
These are exactly the kinds of projects where Codex shines.
The key features:
Multi-agent parallelism. You can spin up multiple agents working on different parts of a project simultaneously. Each runs in an isolated "worktree"—a separate copy of your codebase—so they don't conflict. One agent refactors your database queries while another adds a new dashboard view.
Skills. This is the interesting part. Skills are bundles of instructions, resources, and scripts that teach Codex how your team works. You can create Skills for:
- Your coding standards and style guides
- How to run tests in your specific environment
- Deployment procedures to your on-prem servers
- Integration patterns with your industrial systems
OpenAI demonstrated this by having Codex build a complete racing game from a single prompt using 7 million tokens—it autonomously designed, coded, and tested the application using image generation and web development Skills.
Automations. Codex can now run scheduled tasks in the background—issue triage, CI/CD monitoring, alert handling—with a review queue for you to approve. Think of it as a junior developer who works nights and weekends on the tedious stuff.
AGENTS.md. Like README files, you can drop an AGENTS.md file in your repository to tell Codex how to navigate your codebase, which commands to run for testing, and how to follow your project's standards. This is how you make AI assistance actually match your team's workflow.
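For a sense of what that looks like, here's an entirely illustrative AGENTS.md for a hypothetical internal dashboard repo; the commands, paths, and rules are placeholders for your own.

```markdown
# AGENTS.md

## Setup and tests
- Install dependencies with `pip install -r requirements.txt`
- Run `pytest -q` before proposing any change; all tests must pass

## Conventions
- Dashboard components live in dashboard/components/, one chart per module
- Historian access goes through plantdata/client.py; never query the database directly
- Run ruff and black before committing

## Boundaries
- Never modify anything under plc_scripts/ or files that touch the OT network
- Ask before adding new third-party dependencies
```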
Real-world scenario:
You need to add a new chart to your production dashboard that shows OEE trends by shift.
Without Codex: You context-switch from your current work, open the project, remember how the charting library works, write the query, build the component, test it, and push a PR. Maybe 2-4 hours of interrupted work.
With Codex: You describe what you need in plain language, assign it to an agent, and continue your actual work. The agent reads your codebase, follows your AGENTS.md instructions, writes the code, runs your tests, and either succeeds or asks clarifying questions. You review the diff when you have time.
You're not replacing your expertise—you're delegating the mechanical parts.
The security considerations:
By default, Codex runs in a sandboxed environment with network access disabled, whether locally or in the cloud. This matters, especially after GTG-1002.
You can enable network access for specific trusted domains if needed (installing dependencies, running tests that hit external resources), but it's opt-in with granular controls. The agent can ask for permission before potentially dangerous actions.
That said: any tool that can write and execute code on your systems is a tool that can be misused. Treat Codex with the same operational security mindset you'd apply to any privileged access.
Who gets access:
- ChatGPT Plus, Pro, Business, Enterprise, and Edu subscribers get Codex access across all interfaces
- Free and Go users get temporary limited access (Sam Altman says about two months)
- OpenAI doubled rate limits for all paid plans during the launch period
The app itself is macOS-only for now (Windows in development). You can join the waitlist at openai.com/form/codex-app.
The bigger picture:
Since GPT-5.2-Codex launched in mid-December, overall Codex usage has doubled. More than one million developers used Codex in the past month alone.
This isn't a toy anymore. It's becoming how software gets built—including the internal software that runs manufacturing operations.
OpenAI's explicit vision: developers will drive the work they want to own and delegate the rest to agents. Real-time pairing (like the CLI) for interactive work, asynchronous delegation (like the app) for longer tasks. Both modes converging into unified workflows.
The bottom line:
You don't need to adopt Codex tomorrow. But if your team builds any custom software—dashboards, integrations, automation scripts, internal tools—understanding how AI coding agents work is becoming essential.
The gap between "we use AI to help write code" and "AI agents are part of our development workflow" is closing fast. Codex is OpenAI's bet on what that future looks like.
OpenAI's Codex announcement →
Codex CLI on GitHub (open source) →
Codex developer documentation →
Simon Willison's hands-on review →
Byte-Sized Brilliance
Your Car Has More Code Than a Fighter Jet
When the first automotive microcomputer hit production in 1977 (an electronic spark timing controller in the Oldsmobile Toronado), it ran just one function. By 1981, General Motors was running engine-control computers across its entire domestic line, executing about 50,000 lines of code.
Today? A modern premium automobile contains 100 to 150 million lines of software code, running on 70-100 microprocessor-based electronic control units (ECUs) networked throughout the vehicle.
For perspective:
- The F-22 Raptor fighter jet runs on about 1.7 million lines
- The F-35 Joint Strike Fighter needs roughly 5.7 million lines
- The Boeing 787 Dreamliner requires about 14 million lines
- Your Toyota Camry probably has 100 million+ lines
That's right—the car sitting in your plant parking lot runs on more software than an advanced fighter jet and a commercial airliner combined.
And we're just getting started. Autonomous vehicles are projected to need 300 to 500 million lines of code by 2030. That's approaching half a billion lines of code rolling down the highway at 70 mph.
Manufacturing facilities aren't far behind. Between your PLCs, historians, SCADA systems, MES, ERP integration layers, edge analytics, and increasingly, AI agents helping operators—the amount of code keeping a modern factory running is staggering.
This week's newsletter covered AI agents that write code, manage context, and orchestrate across systems. It's not because the tech industry got bored. It's because the complexity of software-defined everything has outpaced what humans can reliably write, maintain, and secure on their own.
The first Unimate robot in 1961 had no software at all—just cams, drums, and hydraulics. Today's robotic cells run millions of lines of code coordinating vision systems, force sensing, path planning, and safety interlocks.
In roughly 45 years, we went from 50,000 lines of code across an entire car company's fleet to 100 million lines in a single vehicle.
The question isn't whether AI will help write industrial software. It's whether we can keep up without it.