AWS re:Invent 2025 - Build production AI agents with the Strands Agents SDK for TypeScript (AIM3331)
This presentation by Ryan Coleman and Nicholas Clegg introduces the Strands Agents SDK for TypeScript, a new release designed to help developers build enterprise-ready AI agents (0:32).
Here are the key takeaways from the presentation:
Strands Agents SDK for TypeScript (Preview Release)
- This is a new preview release (0.1 on GitHub and npm) of the Strands SDK, bringing the same simple interface as the Python SDK to TypeScript (1:28).
- It currently supports building single agent systems and includes basic features like Bedrock and OpenAI model integration, async and non-async operations with streaming, full agent state, conversation management, and hooks for life cycle inspection (1:57).
- Multi-agent patterns and some other features are not yet supported; these will fast-follow in the coming weeks (1:59).
Model-Driven Agent Approach

- Strands defines agents as systems that run tools in a loop to solve a goal (3:08).
- The core idea is to let the model define its own workflow, rather than relying on deterministic, workflow-driven approaches (4:19, 15:01).
- Developers primarily focus on defining the goal (system prompt) and providing the tools the agent can use (14:15).
- The agent uses a reasoning model to determine how to use tools, assess results, and iterate until the goal is achieved (3:39).
- Strands supports any model provider (e.g., Gemini, OpenAI, Anthropic, Nova) and allows for custom gateways (4:54).
- Agents can use other models as tools, for instance, a reasoning model using an image generation model (6:00).
- Strands embraces open standards like MCP servers and OpenTelemetry for traces (6:26).
- Any custom function can be exposed as a tool to the agent (7:04).
- An agent itself can be a tool, enabling modularity and breaking down complexity, a pattern known as the supervisor pattern (7:35).
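The "tools in a loop" idea above can be sketched in a few lines of self-contained TypeScript. This is a conceptual illustration, not the Strands API: the stub model, the tool names, and the `runAgent` helper are all hypothetical.

```typescript
// Minimal illustration of "run tools in a loop to solve a goal".
// A stub "model" decides which tool to call next; the loop runs tools
// and feeds results back until the model signals completion.

type Tool = (input: string) => string;

interface ToolCall { tool: string; input: string }
type ModelDecision = ToolCall | { done: true; answer: string };

// Stub reasoning model: in a real agent this is an LLM call
// (Bedrock, OpenAI, Anthropic, ...).
function stubModel(goal: string, history: string[]): ModelDecision {
  if (history.length === 0) return { tool: "search", input: goal };
  if (history.length === 1) return { tool: "summarize", input: history[0] };
  return { done: true, answer: history[history.length - 1] };
}

function runAgent(goal: string, tools: Record<string, Tool>): string {
  const history: string[] = [];
  for (let step = 0; step < 10; step++) {   // safety cap on iterations
    const decision = stubModel(goal, history);
    if ("done" in decision) return decision.answer;
    const result = tools[decision.tool](decision.input);
    history.push(result);                    // model sees tool results next turn
  }
  throw new Error("goal not reached");
}

// Any function can be a tool; an agent can itself be wrapped as a tool,
// which is the supervisor pattern mentioned above.
const tools: Record<string, Tool> = {
  search: (q) => `results for "${q}"`,
  summarize: (t) => `summary of ${t}`,
};
const subAgentAsTool: Tool = (input) => runAgent(input, tools);

console.log(runAgent("strands sdk", tools));
```

The point of the sketch is the division of labor: the developer supplies the goal and the tools, and the model (here a stub) owns the control flow.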
Design Principles (Tenets)
- Simple: Easy to prototype on a laptop and scale to production (9:27).
- Extensible: Applies to model providers, tools, and hooks for controlling the agent's life cycle (9:47).
- Composed: Components are designed to work together, guiding developers to layer on functionality (10:22).
- Accessible to Humans and Agents: Exposes documentation in formats understandable by both humans (HTML) and models (Markdown via the llms.txt standard) (10:38).
- Common Standards: Supports major emerging standards (11:20).
History and Impact of Strands
- Strands originated inside Amazon on the Amazon Q and Kiro teams, developed to help dozens of internal teams onboard agentic functionality efficiently (11:31).
- It reduced the time from concept to production for agentic experiences from months to weeks (12:59).
- The Python SDK has seen over 5 million downloads since its public preview release in May (13:17).
Controlling and Steering Agents
While model-driven, there is a spectrum of control from free rein to highly constrained (19:22).
SOPs (Standard Operating Procedures)
A technique for structured system prompting: natural-language instructions with keywords like "must" and "should" guide the agent through its steps (21:16, 24:13). SOPs were developed internally at Amazon and have since been shared publicly.
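As an illustration, an SOP-style system prompt might look like the following. The refund scenario, tool names, and dollar threshold are invented for this example; only the "must"/"should" keyword convention comes from the talk.

```typescript
// Hypothetical SOP: numbered natural-language steps, with MUST/SHOULD
// keywords marking hard requirements versus preferences.
const refundSop = `
You are a refund-processing agent. Follow this procedure:
1. You MUST verify the order ID with the lookupOrder tool before anything else.
2. You SHOULD summarize the order details back to the customer.
3. You MUST NOT issue a refund above $100 without calling the approval tool.
4. If any step fails, stop and report the failure.
`;
```

The structure gives the agent repeatable behavior while still leaving the model to decide how each step is carried out.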
Steering (Modular Prompting)
An experimental feature allowing critical instructions to be injected via life cycle hooks, reducing upfront prompt size and making models more token efficient (25:34). It uses an external judge (e.g., an LLM or deterministic library) to evaluate the agent's trajectory and provide feedback for self-correction (26:00).
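A rough sketch of the steering idea, assuming a post-step hook and a pluggable judge; none of these names come from the Strands API.

```typescript
// Steering sketch: after each agent step, an external "judge" inspects the
// trajectory. If it drifts, a corrective instruction is injected into the
// next model turn instead of front-loading everything into the system prompt.

type Judge = (trajectory: string[]) => string | null; // null = on track

// A deterministic judge (could also be an LLM): flag steps that skip tests.
const noSkippedTests: Judge = (traj) =>
  traj.some((s) => s.includes("skip tests"))
    ? "You must run the test suite before committing."
    : null;

function afterStepHook(trajectory: string[], judges: Judge[]): string[] {
  // Collect corrective instructions to feed back for self-correction.
  return judges.map((j) => j(trajectory)).filter((m): m is string => m !== null);
}

const feedback = afterStepHook(["edit file", "skip tests", "commit"], [noSkippedTests]);
```

Because instructions are injected only when needed, the upfront prompt stays small, which is the token-efficiency benefit described above.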
Constraints
Applied through hooks (e.g., PII guardrails after tool calls) or external policy engines (e.g., the AgentCore Gateway product for deterministic access control before tool calls) to protect sensitive outcomes (23:29).
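A minimal sketch of a post-tool-call PII guardrail, assuming a hook that can rewrite tool output before the model sees it; the patterns and function name are illustrative, not part of any SDK.

```typescript
// Hypothetical guardrail hook: redact PII-like patterns in tool output
// before it re-enters the agent loop.

const piiPatterns: RegExp[] = [
  /\b\d{3}-\d{2}-\d{4}\b/g,        // US SSN-like numbers
  /\b[\w.+-]+@[\w-]+\.[\w.]+\b/g,  // email addresses
];

function afterToolCallGuard(toolOutput: string): string {
  return piiPatterns.reduce((out, re) => out.replace(re, "[REDACTED]"), toolOutput);
}

console.log(afterToolCallGuard("Contact jane@example.com, SSN 123-45-6789"));
```

Running the guard deterministically in a hook, rather than asking the model to self-censor, is what makes this a constraint rather than a suggestion.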
Evaluations Library
Launched to help developers evaluate agents and build evaluators, including features for generating synthetic datasets (27:28).
Building the TypeScript SDK with Agents
- The TypeScript SDK itself was built with Strands, not through "vibe coding" (28:45).
- The team focused on improving the developer experience of writing code with agents (29:50).
- Early AI coding tools often produced low-quality code that required significant human review and steering (30:26).
- Agent SOPs were crucial for writing better code by providing repeatable behavior and making agents debuggable (31:28). This approach, developed by James Hood, has been widely successful internally at Amazon (31:54).
Prompt-Driven Development SOP
Guides an agent through a Q&A process to refine an idea into an implementable task, helping to avoid the shortest, unintended path to a solution (34:15).
Code Assist SOP
Takes the implementable task and implements it using a test-driven development methodology, allowing for iterative feedback (35:20).
Writing Code Faster
Achieved by integrating agents into GitHub workflows, making GitHub a "member of the team" (36:03).
Python Strands agents are triggered by GitHub Actions (on pull requests and issues) to contribute code to the TypeScript repository, reducing the developer's role as a "middleman" (37:10).
Refiner Agent
Reads a GitHub issue with a feature request, explores the codebase, and asks clarifying questions as comments on the issue (38:10, 39:43).
Implement Agent
Takes the refined issue, creates a branch, implements the code using TDD, and creates a pull request, allowing for feedback via PR comments (38:55).
Important Resources
Presenters: Ryan Coleman and Nicholas Clegg