About Brian

BRIAN P. CONNELLY

AI Practitioner and Educator

From Expert Systems to Agentic AI

Most people entering the AI space today are learning agent orchestration patterns, context engineering, and tool-use architectures as novel concepts. Brian Connelly has been working with the underlying principles since 1988.

His career spans the full arc of applied artificial intelligence, from building expert systems for Fortune 500 companies in the late 1980s through current daily practice with generative and agentic AI. What makes his perspective unusual is the combination of deep technical lineage, three decades of enterprise systems architecture, and an earlier career in clinical social work that grounds his approach in how humans actually think, learn, and make decisions.

He is one of a small number of practitioners who can trace a direct conceptual line from the knowledge engineering and workflow coordination frameworks of the 1980s to today’s agentic AI design patterns, and who can explain that connection in accessible, non-technical language.

EDUCATION AND CREDENTIALS

BA, Organizational Development/Cognitive Psychology, Ramapo College
MIT FinTech Certificate
Andrew Ng / DeepLearning.AI, Agentic AI Design Patterns (ongoing)
Google Generative AI Coursework (ongoing)

THE BRIDGE: WHY THE LINEAGE MATTERS

The current AI landscape is dominated by a hammer-looking-for-nails problem. Organizations acquire powerful generative AI tools and then scramble to find applications for them. The question driving most AI adoption today is “what can we do with this technology?”

Knowledge Engineering, the discipline Mr. Connelly trained in and practiced throughout his career, starts from the opposite direction. It asks: where does critical knowledge live, how does it move, where does it break down, and what would it take to make it available at the point of decision? The technology is selected to serve the answer, not the other way around.

This distinction matters enormously right now. The organizations getting real value from AI are not the ones deploying the most tools. They are the ones that understood their knowledge problems first: which decisions depend on expertise that lives in one person’s head, which processes break when a key employee retires, which institutional memory disappears when a team reorganizes. Knowledge Engineers were trained to diagnose these problems through structured methodologies, including direct observation, expert interviewing, and workflow analysis, before writing a single line of code or selecting a single platform.

The current revolution in agentic AI rests on concepts that have deep roots in this discipline and in the computational philosophy that informed it. Today’s practitioners talk about “context engineering,” “tool use,” “agent loops,” and “skills” as if they emerged fully formed in 2024. They didn’t.

In 1986, Terry Winograd and Fernando Flores published “Understanding Computers and Cognition,” which modeled work as cycles of speech acts: requests, promises, declarations, and assertions moving through predictable phases of preparation, negotiation, performance, and acceptance. Flores later built commercial software (The Coordinator) around this framework.

Brian Connelly was trained in and practiced this methodology throughout his knowledge engineering career, applying it to expert system development, knowledge acquisition, and enterprise workflow design.

The discipline required identifying the right problem before building the system, understanding the human knowledge flows before automating them, and recognizing that the most sophisticated technology deployed against the wrong problem produces nothing of value.

The mapping to contemporary agentic AI is direct and precise. Agent planning loops are conversations for action. Tool calls are computational speech acts. Breakdown resolution (when an agent’s action fails and it reassesses) is exactly what Flores described as the engine of real work.
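The correspondence can be made concrete. Below is a minimal, illustrative sketch (the class, names, and structure are my own, not taken from any particular agent framework) of an agent loop modeled as a Flores-style conversation for action: a request moves through negotiation, performance, and acceptance, and a failed tool call triggers breakdown resolution rather than a halt.

```python
from dataclasses import dataclass, field
from enum import Enum, auto
from typing import Callable

class Phase(Enum):
    PREPARATION = auto()
    NEGOTIATION = auto()
    PERFORMANCE = auto()
    ACCEPTANCE = auto()

@dataclass
class ConversationForAction:
    """One request/promise cycle, in the Winograd-Flores sense."""
    request: str
    phase: Phase = Phase.PREPARATION
    history: list = field(default_factory=list)

    def run(self, perform: Callable[[str], str], max_retries: int = 2) -> str:
        self.phase = Phase.NEGOTIATION          # the agent "promises" to act
        self.history.append(("promise", self.request))
        for attempt in range(max_retries + 1):
            self.phase = Phase.PERFORMANCE
            try:
                result = perform(self.request)  # tool call = computational speech act
            except RuntimeError as breakdown:
                # Breakdown resolution: record the failure, reassess, retry.
                self.history.append(("breakdown", str(breakdown)))
                continue
            self.phase = Phase.ACCEPTANCE       # the requester declares satisfaction
            self.history.append(("assertion", result))
            return result
        raise RuntimeError("conversation ended without acceptance")

# Usage: a flaky "tool" that fails once, then succeeds.
calls = {"n": 0}
def lookup(q: str) -> str:
    calls["n"] += 1
    if calls["n"] == 1:
        raise RuntimeError("transient tool failure")
    return f"answer to: {q}"

convo = ConversationForAction("find the shipping status")
print(convo.run(lookup))  # -> answer to: find the shipping status
```

The point of the sketch is structural, not practical: the retry-on-breakdown loop that every modern agent runtime implements is the same cycle Flores described, with an LLM standing in where a rule base once stood.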

The “skills” architecture that enabled OpenClaw and similar personal agents, where a folder containing prompts and code maps user intent to action and is only loaded when relevant, is a modern implementation of the just-in-time knowledge retrieval that knowledge engineers were designing 35 years ago.
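To illustrate the just-in-time pattern, here is a hypothetical sketch of skill loading. The folder layout, `manifest.json` schema, and keyword-trigger matching are illustrative assumptions of my own, not the actual format used by OpenClaw or any other agent: the essential idea is only that nothing enters the agent's context until the user's intent matches a trigger.

```python
import json
import tempfile
from pathlib import Path

def load_relevant_skill(skills_dir: Path, user_intent: str):
    """Scan skill folders and load only the one matching the intent.

    Each skill folder is assumed (hypothetically) to hold a manifest.json
    with 'triggers' (keywords) and a 'prompt' filename. Nothing is read
    into context until a trigger matches -- just-in-time retrieval.
    """
    for folder in skills_dir.iterdir():
        manifest_path = folder / "manifest.json"
        if not manifest_path.is_file():
            continue
        manifest = json.loads(manifest_path.read_text())
        if any(t in user_intent.lower() for t in manifest["triggers"]):
            # Only now pay the context cost of loading the skill's prompt.
            manifest["prompt_text"] = (folder / manifest["prompt"]).read_text()
            return manifest
    return None

# Usage: build one example skill folder in a temp directory.
tmp = Path(tempfile.mkdtemp())
skill = tmp / "invoice_lookup"
skill.mkdir()
(skill / "manifest.json").write_text(json.dumps(
    {"name": "invoice_lookup", "triggers": ["invoice"], "prompt": "prompt.md"}))
(skill / "prompt.md").write_text("You retrieve invoices by ID.")

loaded = load_relevant_skill(tmp, "Find invoice 1042")
print(loaded["name"])  # -> invoice_lookup
```

A 1980s knowledge engineer would recognize the shape immediately: a library of encoded expertise, indexed by the situations that call for it, consulted only at the point of decision.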

This isn’t an abstract analogy. It is the same ontology, re-implemented with large language models instead of rule-based expert systems. The difference between organizations that will thrive with AI and those that will waste millions on it comes down to whether they approach adoption as a technology deployment problem or a knowledge engineering problem. Understanding this lineage provides a conceptual and practical advantage that purely forward-looking AI education cannot offer.

CURRENT PRACTICE

Piketown Enterprises Inc., Founder (2019 to Present)

Active daily practitioner of generative AI tools including Claude, DeepSeek R1, Google Gemini, Perplexity, and ChatGPT. Currently studying agentic AI design patterns through formal coursework and hands-on experimentation, with particular focus on how orchestration patterns, multi-agent coordination, and context engineering connect to Flores and Winograd’s Language as Action philosophy.

Author of five published books on digital monetary education, all produced in deep collaboration with AI, representing a working model of human-AI creative partnership. The author provides imagination, direction, domain expertise, and editorial judgment. AI provides precision, linguistic support, research augmentation, and drafting capability. This is not a novelty. It is a daily professional workflow that has produced hundreds of thousands of words of published material across multiple genres and dozens of articles on related topics.

ENTERPRISE AI AND KNOWLEDGE SYSTEMS CAREER

Hayward Industries Inc., Systems Architect (2012 to 2019). Implemented enterprise e-discovery and knowledge management systems, including an email legal search and discovery platform that saved $5MM in outside legal costs by bringing search, preservation, and retrieval of electronic communications in-house for litigation and regulatory compliance. Led company-wide GDPR compliance implementation leveraging automated data governance frameworks. Managed knowledge systems supporting global operations across marketing, engineering, and customer service.

Workgroup Associates Inc., Senior Consultant and Principal (1993 to 2012). Nearly two decades designing and deploying knowledge management, intelligent automation, and enterprise infrastructure for Fortune 500 clients. This body of work represents the practical application of knowledge engineering principles at scale, across industries, and under real regulatory and operational pressure.

Key engagements included installing SEC Rule 17a-4-compliant intelligent archiving solutions at the New York Stock Exchange, preventing multi-million-dollar regulatory penalties; developing document management and knowledge systems at GE Asset Management supporting 140,000+ users; leading the architectural separation of 18,000 Cap Gemini users from 24,000 Ernst & Young users across 21 countries, a complex knowledge transfer and system integration project; redesigning mission-critical infrastructure at The Westfield Group, IBM, Discovery Communications, and Novartis; and saving IBM $3MM through parts distribution automation. Each of these engagements required solving the same fundamental problem that agentic AI now addresses computationally: how to capture, structure, and deploy institutional knowledge so the right information reaches the right decision-maker at the right moment.

Public Service Electric & Gas, Strategic Systems Analyst (1991 to 1993). Designed an executive decision support laboratory that reduced decision timeframes and operational costs. Implemented a comprehensive Knowledge Network for systematically transferring complex operational knowledge from retiring subject matter experts to incoming staff, a knowledge acquisition and retention challenge that remains central to enterprise AI adoption today. Served as primary evaluator of emerging AI and knowledge technologies for utility business unit applications.

John Hancock Life Insurance, Knowledge Systems Developer (1989 to 1991). Built a comprehensive expert system to accelerate assimilation of corporate group business coverage, substantially reducing time-to-close for new group business deals through automated knowledge application. Applied structured knowledge acquisition methodologies to capture complex underwriting and business development expertise. This was applied AI in a commercial insurance environment before the term “AI” carried its current cultural weight.

XICOM Inc., Knowledge Engineer (1988 to 1989). Member of a corporate think tank in Sterling Forest, NY providing specialized knowledge engineering services for Fortune 500 clients including International Paper and PepsiCo. Developed expert systems for troubleshooting multi-million-dollar manufacturing processes. Applied knowledge capture and representation techniques across manufacturing and distribution domains. This was the starting point: learning to extract tacit knowledge from human experts and encode it in systems that could reason, recommend, and act.

FOUNDATIONAL CAREER

Clinical Social Worker, Newark, NJ (1976 to 1987). Eleven years of direct practice in one of America’s most challenging urban environments. This experience built the foundation for everything that followed: a deep understanding of how people actually process information and make decisions under pressure, how institutions succeed and fail, and how individuals relate to the systems they inhabit. The pattern recognition, systems thinking, and human-centered perspective developed during this period directly informed a technology career that has always prioritized how systems serve people rather than the reverse.

TECHNICAL COMPETENCIES

Current generative AI: Claude (Anthropic), DeepSeek R1, Google Gemini, Perplexity, ChatGPT
Agentic AI: Studying orchestration patterns, multi-agent coordination, tool-use architectures, context engineering, skills-based agent design
Historical AI: Expert systems, knowledge acquisition, knowledge representation, decision support systems, rule-based reasoning
Theoretical foundations: Flores/Winograd Language as Action, speech act theory, Conversations for Action framework
Enterprise systems: Cloud architecture (AWS, Google), system migration, high-availability clustering, identity and access management, compliance (GDPR, HIPAA, SEC)