AT A GLANCE
- Fragmented Pipes Between Systems of Record and Intelligence: Tool sprawl across workload automation (WLA), ETL, iPaaS, ESB, streaming platforms like Kafka, and data-engineering tools such as Python and Airflow creates brittle connections. Teams need a unifying orchestration layer with governance, observability, and SLA management.
- AI Puts Unstructured Data to Work: LLMs now extract signal from emails and calendars, which raises pressure to harden the pipelines that feed new systems of intelligence without ripping out existing stacks.
- Reliability at Scale Needs Hardening: Large enterprises see 1–5% job failures across huge volumes. By baking LLM-driven logic into jobs to handle schema changes, upstream delays, and outages, teams can push failures toward about 0.1%.
- Ad Hoc Automation Requires Guardrails: Business users will build agents. MCP servers multiply. Without controls, data can bypass IT and land in external LLMs. A control plane must enforce masking, scoped access, auditing, and policy-based retrieval.
I reviewed a Broadcom Summit conversation where Serge, a Broadcom GM, shared blunt lessons on AI and automation.
The focus? Customers want efficiency and connectivity across the enterprise.
He framed the problem as linking two pillars: systems of intelligence for insights and systems of record for critical data.
His opener set the tone: “the biggest challenge is the fragmentation of tools… the pipelines from the systems of record to the systems of intelligence.”
From there, the talk moved to why LLMs unlock unstructured data and why business pressure will harden those pipelines fast.
Ready for the key takeaways?
What Changes with AI (and What Doesn’t)
AI changes the systems of intelligence conversation.
As Serge put it, “we’re building the steam engine” for data. LLMs unlock unstructured sources like email and calendar invites, so teams can derive the real state of opportunities instead of relying on manual CRM updates.
Business stakeholders will push harder to tighten the connection between systems of record and systems of intelligence.
What doesn’t change is the stack mix. Workload automation, iPaaS, ESB, streaming platforms like Kafka, and data engineering tools such as Python and Airflow continue to form the connective tissue.
Fragmentation and brittleness remain real. From my chair, the job now is to harden those pipelines, not rip and replace, and to prepare for ad hoc insights as business users build agents that query source data under clear controls.
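To make the "derive the real state of opportunities" idea concrete, here is a minimal sketch. In a real pipeline an LLM would read the email body and return a structured stage; a keyword heuristic stands in for that call here, and the stage names and phrases are illustrative assumptions, not a production taxonomy.

```python
# Stand-in for an LLM call: a real pipeline would send the email body to a
# model and parse a structured response. These stage keywords are
# illustrative assumptions only.
STAGE_SIGNALS = {
    "closed_won": ["signed the contract", "po attached"],
    "negotiation": ["redlines", "pricing call"],
    "stalled": ["circle back next quarter", "on hold"],
}

def infer_opportunity_stage(email_body: str) -> str:
    """Derive a CRM-style stage from unstructured email text."""
    text = email_body.lower()
    for stage, phrases in STAGE_SIGNALS.items():
        if any(p in text for p in phrases):
            return stage
    return "unknown"

print(infer_opportunity_stage("Thanks! PO attached, please countersign."))  # closed_won
```

The point is the shape of the flow, not the classifier: unstructured text goes in, a system-of-record field comes out, with no manual CRM update in between.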
Designing the Control Plane
Start with orchestration. Enterprises run overlapping automations across Airflow, streaming, managed file transfer, and classic workload automation.
As Serge said, teams need “an orchestration layer across all of these different technologies,” plus governance and observability with SLA management.
Next, strengthen controls: scoped access, data masking, and auditability so organizations can prove that automation safeguarded data.
Then enable ad hoc work safely. A business prompt should trigger platform-built retrieval, filtering, masking, and scoping behind the scenes, not ad hoc exports to public LLMs.
With this approach, different automation technologies “will continue to be there in the next decade or decades to come,” and IT sets policy through a control plane.
Ad Hoc Automation – With Guardrails
Business users will spin up ad hoc insights, but control matters.
Serge noted that MCP servers can handle authentication and authorization, yet many business users won’t bother to request API keys. If IT doesn’t enable governed paths, people export data to public LLMs, and those systems can retain derived information outside company control. EU deletion rights then become hard to honor.
We need a control plane. Orchestrate across existing automation, and strengthen governance with observability, SLA management, access scoping, masking, and auditability. As Serge described, a business question should trigger platform-built automation that retrieves, filters, masks, and scopes data automatically.
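What "retrieves, filters, masks, and scopes automatically" could look like in miniature: before any record reaches an LLM, the platform checks the caller's scope, masks sensitive fields, and writes an audit entry. The field names, scope string, and audit shape below are assumptions made for the sketch.

```python
import hashlib

# Illustrative guardrail layer: enforce scoped access, mask sensitive
# fields, and keep an audit trail. Field names and the "crm:read" scope
# are assumptions for this sketch.
SENSITIVE_FIELDS = {"email", "phone"}
AUDIT_LOG: list[dict] = []

def mask(value: str) -> str:
    """Replace a sensitive value with a short, irreversible token."""
    return hashlib.sha256(value.encode()).hexdigest()[:8]

def governed_retrieve(record: dict, caller: str, scopes: set[str]) -> dict:
    if "crm:read" not in scopes:
        AUDIT_LOG.append({"caller": caller, "action": "denied"})
        raise PermissionError("caller lacks crm:read scope")
    safe = {k: (mask(v) if k in SENSITIVE_FIELDS else v) for k, v in record.items()}
    AUDIT_LOG.append({"caller": caller, "action": "read", "fields": sorted(safe)})
    return safe

row = {"account": "Acme", "email": "cfo@acme.example", "phone": "555-0100"}
print(governed_retrieve(row, "sales_agent", {"crm:read"}))
```

Because the masking and logging sit in the retrieval path itself, a business prompt can trigger the same query an analyst would write, without the raw identifiers ever leaving the platform.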
From Brittle to Resilient
Fragmented stacks break under load. Serge called out “the fragmentation of tools” that form the pipelines from systems of record to systems of intelligence.
Large enterprises feel it in production with 1–5% job failures across massive volumes, including environments that run millions of jobs a month and some that hit 5–10 million a day. That waste adds toil and delays.
So we harden the pipelines. The team embeds LLM-driven logic into automation to handle schema changes, upstream lateness, and temporary system outages.
The goal is practical: move typical failure rates from about 3–4% toward 0.1%. This matters for core business flows like supply chain automation and financial reconciliation.
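The hardening itself can be mundane code rather than anything exotic. This sketch shows two of the failure modes named above handled in a job wrapper: bounded retries with exponential backoff for transient outages, and tolerant parsing when an upstream producer renames a column. The function names and the simulated flaky source are assumptions for illustration.

```python
import time

# Hedged sketch of hardening a job against transient outages and schema
# drift. The specific failure modes simulated here are assumptions.

def tolerant_parse(row: dict) -> dict:
    # Accept a renamed column ("amt" vs "amount") instead of failing the batch.
    amount = row.get("amount", row.get("amt"))
    if amount is None:
        raise ValueError("no amount field in row")
    return {"order_id": row["order_id"], "amount": float(amount)}

def run_with_retry(fn, attempts: int = 3, base_delay: float = 0.01):
    """Retry a callable on connection errors, backing off exponentially."""
    for i in range(attempts):
        try:
            return fn()
        except ConnectionError:
            if i == attempts - 1:
                raise
            time.sleep(base_delay * 2 ** i)

calls = {"n": 0}
def flaky_fetch():
    # Simulated upstream that fails twice before recovering.
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("upstream not ready")
    return [{"order_id": "A1", "amt": "19.99"}]

rows = [tolerant_parse(r) for r in run_with_retry(flaky_fetch)]
print(rows)  # [{'order_id': 'A1', 'amount': 19.99}]
```

An LLM-driven version would go further, for example proposing a column mapping when drift is detected, but even this much absorbs the schema changes and upstream lateness that account for a large share of routine job failures.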
Who Leads the Shift
Enterprise Architecture leads the charge.
Architects set direction, but they often try to reinvent tooling. Operations teams that run automation need to educate architects and show how current platforms enable connectivity between systems of record and systems of intelligence. That partnership keeps design patterns from fragmenting and preserves visibility across core jobs.
Automation and Operations also need stronger control. As new tools and agents proliferate, teams should anchor on orchestration, governance, observability, and SLA management.
The near-term focus stays practical: harden automations with built-in intelligence and enable ad hoc automation inside a governed control plane with masking, scoped access, and auditability.
Conclusion
AI raises the stakes: connect systems of record to systems of intelligence with orchestration, governance, and LLM-hardened automation.
My team helps you put a control plane in place fast.
Schedule your free Automation Readiness Assessment now. Just say Bob offered it to you.