Amazon Web Services has released a step-by-step integration guide showing developers how to connect Amazon Bedrock AgentCore AI agents to Slack, using infrastructure-as-code tooling to automate the full deployment.

The guide, published on the AWS Machine Learning Blog, targets developers who want to surface AI agents inside Slack without building custom middleware from scratch. It positions Bedrock AgentCore — AWS's managed runtime for deploying and scaling AI agents — as the backend, with Slack acting as the conversational front end employees already use daily.

Three Lambda Functions Do the Heavy Lifting

The architecture described centres on three specialised AWS Lambda functions, each handling a distinct part of the integration. According to AWS, separating responsibilities across functions is intentional: one handles Slack's URL verification challenge (a security requirement Slack enforces before activating event subscriptions), another processes incoming messages and routes them to the agent, and a third manages responses back to Slack channels or direct messages.
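The first of those responsibilities is well documented by Slack itself: when an Events API endpoint is registered, Slack POSTs a `url_verification` payload and expects the endpoint to echo back its `challenge` value. A minimal sketch of such a handler follows; the function name and wiring are illustrative assumptions, not code from the AWS guide:

```python
import json


def lambda_handler(event, context):
    """Hypothetical Lambda handler for Slack's URL verification challenge.

    Slack activates an event subscription only after the endpoint echoes
    the "challenge" value from the url_verification payload.
    """
    body = json.loads(event.get("body") or "{}")
    if body.get("type") == "url_verification":
        return {
            "statusCode": 200,
            "headers": {"Content-Type": "text/plain"},
            "body": body["challenge"],  # echo the challenge verbatim
        }
    # Any other event type: acknowledge so Slack does not retry.
    return {"statusCode": 200, "body": ""}
```

Once verification succeeds, this code path is rarely exercised again, which is one argument for keeping it in its own small function.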

This separation matters for reliability. Slack requires an HTTP 200 response within three seconds of delivering an event; if none arrives, it retries the delivery, up to three times. By isolating the acknowledgement function from the heavier agent-invocation logic, the pattern avoids timeout failures that would otherwise trigger retries and duplicate messages.


The entire infrastructure is defined and deployed using AWS Cloud Development Kit (AWS CDK), AWS's code-first infrastructure framework. This means developers can version-control their Slack-to-agent pipeline, replicate it across environments, and tear it down cleanly — advantages that manual console configuration cannot offer.
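In CDK terms, a stack along these lines might define the acknowledgement function and its public endpoint roughly as follows. This is a sketch under assumptions — the construct names, asset paths, and runtime version are illustrative, not taken from the guide — and it is an infrastructure definition, so it requires `aws-cdk-lib` and the CDK CLI to deploy:

```python
from aws_cdk import Stack, Duration, aws_lambda as _lambda, aws_apigateway as apigw
from constructs import Construct


class SlackAgentIntegrationStack(Stack):
    """Hypothetical CDK stack for the Slack-to-agent pipeline."""

    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)

        # Fast-acknowledgement function: must respond inside Slack's
        # three-second window, hence the tight timeout.
        ack_fn = _lambda.Function(
            self, "SlackAckFunction",
            runtime=_lambda.Runtime.PYTHON_3_12,
            handler="ack.ack_handler",
            code=_lambda.Code.from_asset("lambda/ack"),
            timeout=Duration.seconds(3),
        )

        # Public HTTPS endpoint that Slack's event subscription points at.
        apigw.LambdaRestApi(self, "SlackEventsApi", handler=ack_fn)
```

Because the stack is ordinary code, `cdk deploy` and `cdk destroy` give exactly the replicate-and-tear-down workflow the article describes.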

Conversation State and the Multi-Turn Problem

One of the more substantive aspects of the guide is its treatment of conversation management. Slack interactions are inherently stateless at the API level — each incoming event arrives as an independent webhook call with no built-in memory of what came before. For an AI agent to hold a coherent conversation across multiple messages, the integration must explicitly track session state.

AWS's approach, according to the post, implements conversation management patterns designed to work across many agent use cases rather than a single narrow scenario. While the post does not detail the exact persistence mechanism — whether Amazon DynamoDB, ElastiCache, or another store — the pattern acknowledges that session continuity is a first-class requirement, not an afterthought.
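To make the requirement concrete, session continuity in Slack typically hinges on keying state by channel and thread: threaded replies carry a `thread_ts`, while top-level messages are identified by their own `ts`. The in-memory store below is a sketch of that idea only — as noted, the post does not name its persistence layer, and a real deployment would back this with DynamoDB or similar so state survives across Lambda invocations:

```python
class ConversationStore:
    """Illustrative in-memory session tracker keyed by Slack thread."""

    def __init__(self):
        self._sessions = {}

    @staticmethod
    def session_key(slack_event):
        # Replies in a thread share the parent's thread_ts, so they all
        # map to one conversation; top-level messages use their own ts.
        channel = slack_event["channel"]
        thread = slack_event.get("thread_ts", slack_event["ts"])
        return f"{channel}:{thread}"

    def append(self, slack_event, role, text):
        key = self.session_key(slack_event)
        self._sessions.setdefault(key, []).append({"role": role, "text": text})

    def history(self, slack_event):
        """Prior turns to include in the agent's context window."""
        return list(self._sessions.get(self.session_key(slack_event), []))
```

With history retrieved per thread, each agent invocation can be given the preceding turns rather than treating every message as isolated.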

This is a meaningful design choice. Many early chatbot integrations treated each message as isolated, producing agents that forgot context the moment a user sent a follow-up question. Building state management into the CDK stack from the start avoids retrofitting it later.

What Developers Need to Get Started

The integration requires an active AWS account with Bedrock AgentCore access, a Slack workspace where the developer has permission to install apps, and familiarity with AWS CDK. The CDK framework supports TypeScript, JavaScript, Python, Java, C#, and Go, so teams are not locked into a single language for their infrastructure code.

Bedrock AgentCore itself is a commercial AWS service, meaning costs accrue based on agent invocations and runtime. AWS does not publish flat-rate pricing for AgentCore; charges depend on the underlying model invocations, Lambda execution time, and any additional services such as storage or API Gateway calls used in the stack. Developers should factor this in before deploying to production traffic.

The Slack app configuration — creating the app, setting OAuth scopes, and registering the event subscription URL — sits outside AWS and requires navigating Slack's developer portal. The guide addresses this as part of the setup sequence, which reduces the risk of developers getting stuck on Slack-side configuration before their AWS infrastructure is even running.

Practical Fit for Enterprise Workflows

The significance of this integration extends beyond the technical tutorial. Slack is deeply embedded in how many engineering, product, and operations teams communicate. Placing an AI agent inside Slack — rather than asking employees to visit a separate interface — reduces the friction that often limits AI tool adoption in practice.

Use cases enabled by this pattern include internal knowledge assistants that answer questions in channels, automated triage bots that route support requests, and agent workflows triggered by specific Slack commands or mentions. Because the infrastructure is defined in CDK, teams can fork and customise the stack for each use case without starting from zero.

AWS's decision to publish this as a structured CDK pattern rather than a conceptual overview signals a shift toward deployment-ready guidance. The three-function architecture, the attention to Slack's timing constraints, and the explicit conversation management layer suggest the guide was written by engineers who encountered these failure modes and designed around them.

What This Means

Developers with existing Bedrock AgentCore investments now have a production-oriented blueprint for embedding those agents in Slack, with infrastructure-as-code from day one — meaningfully lowering the bar for enterprise AI agent deployments inside tools employees already use.