Releases: mastra-ai/mastra
2025-11-27
Highlights
Stream nested execution context from Workflows and Networks to your UI
Agent responses now stream live through workflows and networks, with complete execution metadata flowing to your UI.
In workflows, pipe agent streams directly through steps:
```ts
const planActivities = createStep({
  execute: async ({ mastra, writer }) => {
    const agent = mastra?.getAgent('weatherAgent');
    const response = await agent.stream('Plan activities');
    await response.fullStream.pipeTo(writer);
    return { activities: await response.text };
  }
});
```
In networks, each step now tracks properly—unique IDs, iteration counts, task info, and agent handoffs all flow through with correct sequencing. No more duplicated steps or missing metadata.
Both surface text chunks, tool calls, and results as they happen, so users see progress in real time instead of waiting for the full response.
AI-SDK voice models are now supported
CompositeVoice now accepts AI SDK voice models directly—use OpenAI for transcription, ElevenLabs for speech, or any combination you want.
```ts
import { CompositeVoice } from "@mastra/core/voice";
import { openai } from "@ai-sdk/openai";
import { elevenlabs } from "@ai-sdk/elevenlabs";

const voice = new CompositeVoice({
  input: openai.transcription('whisper-1'),
  output: elevenlabs.speech('eleven_turbo_v2'),
});

const audio = await voice.speak("Hello from AI SDK!");
const transcript = await voice.listen(audio);
```
Works with OpenAI, ElevenLabs, Groq, Deepgram, LMNT, Hume, and more. AI SDK models are automatically wrapped, so you can swap providers without changing your code.
Changelog
@mastra/ai-sdk
- Support streaming agent text chunks from workflow-step-output
  Adds support for streaming text and tool call chunks from agents running inside workflows via the workflow-step-output event. When you pipe an agent's stream into a workflow step's writer, the text chunks, tool calls, and other streaming events are automatically included in the workflow stream and converted to UI messages.
  Features:
  - Added `includeTextStreamParts` option to `WorkflowStreamToAISDKTransformer` (defaults to `true`)
  - Added `isMastraTextStreamChunk` type guard to identify Mastra chunks with text streaming data
  - Support for streaming text chunks: `text-start`, `text-delta`, `text-end`
  - Support for streaming tool calls: `tool-call`, `tool-result`
  - Comprehensive test coverage in `transformers.test.ts`
  - Updated documentation for workflow streaming and `workflowRoute()`
Example:
```ts
const planActivities = createStep({
  execute: async ({ mastra, writer }) => {
    const agent = mastra?.getAgent('weatherAgent');
    const response = await agent.stream('Plan activities');
    await response.fullStream.pipeTo(writer);
    return { activities: await response.text };
  }
});
```
When served via workflowRoute(), the UI receives incremental text updates as the agent generates its response, providing a smooth streaming experience. (#10568)
- Fix chat route to use agent ID instead of agent name for resolution. The `/chat/:agentId` endpoint now correctly resolves agents by their ID property (e.g., `weather-agent`) instead of requiring the camelCase variable name (e.g., `weatherAgent`). This fixes issue #10469 where URLs like `/chat/weather-agent` would return 404 errors. (#10565)
- Fixes propagation of custom data chunks from nested workflows in branches to the root stream when using `toAISdkV5Stream` with `{ from: 'workflow' }`. Previously, when a nested workflow within a branch used `writer.custom()` to write `data-*` chunks, those chunks were wrapped in `workflow-step-output` events and not extracted, causing them to be dropped from the root stream.
  Changes:
  - Added handling for `workflow-step-output` chunks in `transformWorkflow()` to extract and propagate `data-*` chunks
  - When a `workflow-step-output` chunk contains a `data-*` chunk in its `payload.output`, the transformer now extracts it and returns it directly to the root stream
  - Added comprehensive test coverage for nested workflows with branches and custom data propagation
  This ensures that custom data chunks written via `writer.custom()` in nested workflows (especially those within branches) are properly propagated to the root stream, allowing consumers to receive progress updates, metrics, and other custom data from nested workflow steps. (#10447)
- Fix network data step formatting in AI SDK stream transformation
  Previously, network execution steps were not being tracked correctly in the AI SDK stream transformation. Steps were being duplicated rather than updated, and critical metadata like step IDs, iterations, and task information was missing or incorrectly structured.
  Changes:
  - Enhanced step tracking in `AgentNetworkToAISDKTransformer` to properly maintain step state throughout the execution lifecycle
  - Steps are now identified by unique IDs and updated in place rather than creating duplicates
  - Added proper iteration and task metadata to each step in the network execution flow
  - Fixed agent, workflow, and tool execution events to correctly populate step data
  - Updated network stream event types to include `networkId`, `workflowId`, and consistent `runId` tracking
  - Added test coverage for network custom data chunks with comprehensive validation
  This ensures the AI SDK correctly represents the full execution flow of agent networks with accurate step sequencing and metadata. (#10432)
- [0.x] Make `workflowRoute` `includeTextStreamParts` option default to `false` (#10574)
- Add support for `tool-call-approval` and `tool-call-suspended` events in `chatRoute` (#10205)
- Backports the `messageMetadata` and `onError` support from PR #10313 to the 0.x branch, adding these features to the `toAISdkFormat` function.
  - Added `messageMetadata` parameter to `toAISdkFormat` options
    - Function receives the current stream part and returns metadata to attach to start and finish chunks
    - Metadata is included in `start` and `finish` chunks when provided
  - Added `onError` parameter to `toAISdkFormat` options
    - Allows custom error handling during stream conversion
    - Falls back to the `safeParseErrorObject` utility when not provided
  - Added `safeParseErrorObject` utility function for error parsing
  - Updated `AgentStreamToAISDKTransformer` to accept and use `messageMetadata` and `onError`
  - Updated JSDoc documentation with parameter descriptions and usage examples
  - Added comprehensive test suite for `messageMetadata` functionality (6 test cases)
  - Fixed existing test file to use `toAISdkFormat` instead of the removed `toAISdkV5Stream`
    - All existing tests pass (14 tests across 3 test files)
  - New tests verify:
    - `messageMetadata` is called with the correct part structure
    - Metadata is included in start and finish chunks
    - Proper handling when `messageMetadata` is not provided or returns null/undefined
    - The function is called for each relevant part in the stream
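The `messageMetadata` behavior described above can be modeled as a small pure function. The chunk shapes and function name below are simplified assumptions for illustration, not the actual transformer.

```typescript
// Illustrative sketch: metadata is attached only to 'start' and 'finish'
// chunks, and only when the callback is provided and returns a value.
type Part = { type: string; [key: string]: unknown };
type MessageMetadata = (opts: { part: Part }) => Record<string, unknown> | null | undefined;

function attachMetadata(parts: Part[], messageMetadata?: MessageMetadata): Part[] {
  return parts.map(part => {
    if ((part.type === 'start' || part.type === 'finish') && messageMetadata) {
      const metadata = messageMetadata({ part });
      // null/undefined returns leave the chunk unchanged
      return metadata != null ? { ...part, messageMetadata: metadata } : part;
    }
    return part;
  });
}
```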
- Fixed workflow routes to properly receive request context from middleware. This aligns the behavior of `workflowRoute` with `chatRoute`, ensuring that context set in middleware is consistently forwarded to workflows. When both middleware and the request body provide a request context, the middleware value now takes precedence, and a warning is emitted to help identify potential conflicts.
@mastra/astra
@mastra/auth-clerk
- remove organization requirement from default authorization (#10551)
@mastra/chroma
@mastra/clickhouse
- fix: ensure score responses match saved payloads for Mastra Stores. (#10570)
- Fix message sorting in getMessagesPaginated when using semantic recall (include parameter). Messages are now always sorted by createdAt after combining paginated and included messages, ensuring correct chronological ordering of conversation history. All stores now consistently use MessageList for deduplication followed by explicit sorting. (#10573)
@mastra/cloudflare
- fix: ensure score responses match saved payloads for Mastra Stores. (#10570)
- Fix message sorting in listMessages when using semantic recall (include parameter). Messages are now always sorted by createdAt instead of storage order, ensuring correct chronological ordering of conversation history. (#10545)
@mastra/cloudflare-d1
- fix: ensure score responses match saved payloads for Mastra Stores. (#10570)
- Fix message sorting in getMessagesPaginated when using semantic recall (include parameter). Messa...
2025-11-19
Highlights
Generate Endpoint Fix for OpenAI Streaming
We've switched to using proper generate endpoints for model calls, fixing a critical permission issue with OpenAI streaming. No more 403 errors when your users don't have full model permissions - the generate endpoint respects granular API key scopes properly.
AI SDK v5: Fine-Grained Stream Control
Building custom UIs? You now have complete control over what gets sent in your AI SDK streams. Configure exactly which message chunks your frontend receives with the new sendStart, sendFinish, sendReasoning, and sendSources options.
Changelog
@mastra/ai-sdk
- Add `sendStart`, `sendFinish`, `sendReasoning`, and `sendSources` options to the `toAISdkV5Stream` function, allowing fine-grained control over which message chunks are included in the converted stream. Previously, these values were hardcoded in the transformer.
  BREAKING CHANGE: `AgentStreamToAISDKTransformer` now accepts an options object instead of a single `lastMessageId` parameter.
  Also adds `sendStart`, `sendFinish`, `sendReasoning`, and `sendSources` parameters to the `chatRoute` function, enabling fine-grained control over which chunks are included in the AI SDK stream output. (#10127)
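Conceptually, these options act as a filter over the outgoing stream parts. The sketch below is an assumption about the behavior (drop the matching chunk types when an option is false, default everything to on), not the transformer's actual code.

```typescript
// Minimal model of the send* options, using AI SDK v5-style part type names.
interface SendOptions {
  sendStart?: boolean;
  sendFinish?: boolean;
  sendReasoning?: boolean;
  sendSources?: boolean;
}

function filterChunks(chunks: { type: string }[], options: SendOptions = {}): { type: string }[] {
  const { sendStart = true, sendFinish = true, sendReasoning = true, sendSources = true } = options;
  return chunks.filter(chunk => {
    if (chunk.type === 'start') return sendStart;
    if (chunk.type === 'finish') return sendFinish;
    if (chunk.type.startsWith('reasoning')) return sendReasoning;
    if (chunk.type === 'source') return sendSources;
    return true; // all other chunk types always pass through
  });
}
```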
- Added support for tripwire data chunks in streaming responses.
  Tripwire chunks allow the AI SDK to emit special data events when certain conditions are triggered during stream processing. These chunks include a `tripwireReason` field explaining why the tripwire was activated.
  Usage:
  When converting Mastra chunks to AI SDK v5 format, tripwire chunks are now automatically handled:
  ```ts
  // Tripwire chunks are converted to data-tripwire format
  const chunk = {
    type: 'tripwire',
    payload: { tripwireReason: 'Rate limit approaching' }
  };
  // Converts to:
  // {
  //   type: 'data-tripwire',
  //   data: { tripwireReason: 'Rate limit approaching' }
  // }
  ```
  (#10269)
@mastra/auth
- Allow provider to pass through options to the auth config (#10284)
@mastra/auth-auth0
- Allow provider to pass through options to the auth config (#10284)
@mastra/auth-clerk
- Allow provider to pass through options to the auth config (#10284)
@mastra/auth-firebase
- Allow provider to pass through options to the auth config (#10284)
@mastra/auth-supabase
- Allow provider to pass through options to the auth config (#10284)
@mastra/auth-workos
- Allow provider to pass through options to the auth config (#10284)
@mastra/client-js
- Added optional `description` field to `GetAgentResponse` to support richer agent metadata (#10305)
@mastra/core
- Only handle download image asset transformation if needed (#10122)
- Fix tool outputSchema validation to allow unsupported Zod types like ZodTuple. The outputSchema is only used for internal validation and never sent to the LLM, so model compatibility checks are not needed. (#9409)
- Fix vector definition to fix Pinecone (#10150)
- Add type `bailed` to `workflowRunStatus` (#10091)
- Allow provider to pass through options to the auth config (#10284)
- Fix deprecation warning when agent network executes workflows by using `.fullStream` instead of iterating `WorkflowRunOutput` directly (#10306)
- Add support for `doGenerate` in LanguageModelV2. This change fixes issues with OpenAI stream permissions.
  - Added new abstraction over LanguageModelV2 (#10239)
@mastra/mcp
- Add timeout configuration to mcp server config (#9891)
@mastra/mcp-docs-server
- Add migration tool to mcp docs server for stable branch that will let users know to upgrade mcp docs server @latest to @beta to get the proper migration tool. (#10200)
@mastra/server
- Network handler now accesses thread and resource parameters from the nested memory object instead of directly from request body. (#10294)
@mastra/observability
- Updates console warning when cloud access token env is not set. (#9149)
@mastra/pinecone
- Adjust pinecone settings (#10182)
@mastra/playground-ui
- Fix scorer filtering for SpanScoring, add error and info message for user (#10160)
@mastra/voice-google-gemini-live
- gemini live fix (#10234)
- fix(voice): Fix Vertex AI WebSocket connection failures in GeminiLiveVoice (#10243)
create-mastra
- fix: detect bun runtime and cleanup on failure (#10307)
mastra
2025-11-14
Highlights
1.0 Beta is ready!
We've worked hard on a 1.0 beta version to signal that Mastra is ready for prime time and there will not be any breaking changes in the near future. Please visit the migration guide to get started.
Improved support for files in models
We added the ability to skip downloading images and other files the model supports natively, instead sending the raw URL so the model can handle it on its own. This speeds up the LLM call.
Mistral
Added improved support for Mistral by using the native ai-sdk provider under the hood instead of the OpenAI-compatible provider.
Changelog
@mastra/ai-sdk
- Fix bad dane change in 0.x workflowRoute (#10090)
- Improve ai-sdk transformers, handle custom data from agent sub-workflow and sub-agent tools (#10026)
- Extend the workflow route to accept optional runId and resourceId parameters, allowing clients to specify custom identifiers when creating workflow runs. These parameters are now properly validated in the OpenAPI schema and passed through to the createRun method.
  Also updates the OpenAPI schema to include previously undocumented `resumeData` and `step` fields. (#10034)
@mastra/client-js
- Fix clientTools execution in client js (#9880)
@mastra/core
- Integrates the native Mistral AI SDK provider (`@ai-sdk/mistral`) to replace the current OpenAI-compatible endpoint implementation for Mistral models. (#9789)
- Fix: Don't download unsupported media (#9209)
- Use a shared `getAllToolPaths()` method from the bundler to discover tool paths. (#9204)
- Add an additional check to determine whether the model natively supports specific file types. Only download the file if the model does not support it natively. (#9790)
- Fix agent network iteration counter bug causing infinite loops
  The iteration counter in agent networks was stuck at 0 due to a faulty ternary operator that treated 0 as falsy. This prevented `maxSteps` from working correctly, causing infinite loops when the routing agent kept selecting primitives instead of returning "none".
  Changes:
  - Fixed iteration counter logic in `loop/network/index.ts` from `(inputData.iteration ? inputData.iteration : -1) + 1` to `(inputData.iteration ?? -1) + 1`
  - Changed initial iteration value from `0` to `-1` so the first iteration correctly starts at 0
  - Added a `checkIterations()` helper to validate iteration counting in all network tests
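The two expressions from the fix above behave differently at exactly one value: 0. A standalone demonstration:

```typescript
// With a ternary, an iteration value of 0 is treated as falsy, so the
// counter falls back to -1 and never advances past 0. Nullish coalescing
// (`??`) only falls back on null/undefined, so 0 counts as a real value.
function nextIterationBuggy(iteration: number | undefined): number {
  return (iteration ? iteration : -1) + 1; // 0 is falsy: 0 -> 0 forever
}

function nextIterationFixed(iteration: number | undefined): number {
  return (iteration ?? -1) + 1; // only null/undefined fall back to -1
}
```

This is why the initial value also moved to -1: the first call then yields iteration 0, and each later call increments normally.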
- Exposes `requiresAuth` to custom API routes (#9952)
- Fix agent network working memory tool routing. Memory tools are now included in routing agent instructions but excluded from its direct tool calls, allowing the routing agent to properly route to tool execution steps for memory updates. (#9428)
- Fixes assets not being downloaded when available (#10079)
@mastra/deployer
- Added /health endpoint for service monitoring (#9142)
- Use a shared `getAllToolPaths()` method from the bundler to discover tool paths. (#9204)
@mastra/deployer-cloud
- Use a shared `getAllToolPaths()` method from the bundler to discover tool paths. (#9204)
@mastra/evals
- Remove difflib (#9756)
@mastra/mssql
- Prevents double stringification for MSSQL jsonb columns by reusing incoming strings that already contain valid JSON while still stringifying other inputs as needed. (#9901)
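The idea behind this fix can be sketched as a small helper (the function name is illustrative, not the store's actual API): reuse an incoming string if it already parses as JSON, otherwise stringify the value.

```typescript
// Sketch: avoid double stringification for jsonb-style columns.
function toJsonbValue(value: unknown): string {
  if (typeof value === 'string') {
    try {
      JSON.parse(value);
      return value; // already valid JSON: store it as-is
    } catch {
      // not JSON: fall through and stringify like any other value
    }
  }
  return JSON.stringify(value);
}
```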
mastra
2025-11-05
Highlights
This release focuses primarily on bug fixes and stability improvements.
AI-SDK
We've resolved several issues related to message deduplication and preserving lastMessageIds. More importantly, this release adds support for suspend/resume operations and custom data writes, with network data now properly surfacing as data-parts.
Bundling
We've fully resolved bundling issues with the reflect-metadata package by ensuring it's not removed during the bundling step. This means packages no longer need to be marked as externals to avoid runtime crashes in the Mastra server.
Changelog
@mastra/agent-builder
- update peerdeps (5ca1cca)
@mastra/ai-sdk
- update peerdeps (5ca1cca)
- Preserve lastMessageId in chatRoute (#9556)
- Handle custom data writes in agent network execution events in ai sdk transformers (#9717)
- Add support for suspend/resume in AI SDK workflowRoute (#9392)
@mastra/arize
- update peerdeps (5ca1cca)
@mastra/astra
- update peerdeps (5ca1cca)
@mastra/auth
- update peerdeps (5ca1cca)
@mastra/auth-auth0
- update peerdeps (5ca1cca)
@mastra/auth-clerk
- update peerdeps (5ca1cca)
@mastra/auth-firebase
- update peerdeps (5ca1cca)
@mastra/auth-supabase
- update peerdeps (5ca1cca)
@mastra/auth-workos
- update peerdeps (5ca1cca)
@mastra/braintrust
- update peerdeps (5ca1cca)
@mastra/chroma
- update peerdeps (5ca1cca)
@mastra/clickhouse
- update peerdeps (5ca1cca)
@mastra/client-js
- update peerdeps (5ca1cca)
- Fixes issue where clicking the reset button in the model picker would fail to restore the original LanguageModelV2 (or any other types) object that was passed during agent construction. (#9487)
- Remove unused /model-providers API (#9533)
- Fix undefined runtimeContext using memory from playground (#9328)
@mastra/cloud
- update peerdeps (5ca1cca)
@mastra/cloudflare
- update peerdeps (5ca1cca)
@mastra/cloudflare-d1
- update peerdeps (5ca1cca)
@mastra/core
- update peerdeps (5ca1cca)
- Fix workflow input property preservation after resume from snapshot
  Ensure that when resuming a workflow from a snapshot, the input property is correctly set from the snapshot's context input rather than from resume data. This prevents the loss of original workflow input data during suspend/resume cycles. (#9380)
- Fix a bug where streaming didn't output the final chunk (#9546)
- Fixes issue where clicking the reset button in the model picker would fail to restore the original LanguageModelV2 (or any other types) object that was passed during agent construction. (#9487)
- Fix network routing agent smooth streaming (#9247)
@mastra/couchbase
- update peerdeps (5ca1cca)
@mastra/dane
- update peerdeps (5ca1cca)
@mastra/deployer
- update peerdeps (5ca1cca)
- Improve analyze recursion in bundler when using monorepos (#9490)
- Update peer dependencies to match core package version bump (0.23.4) (#9487)
- Fixes issue where clicking the reset button in the model picker would fail to restore the original LanguageModelV2 (or any other types) object that was passed during agent construction. (#9487)
- Make sure external deps are built with side-effects. Fixes an issue with reflect-metadata #7328 (#9714)
- Remove unused /model-providers API (#9533)
- Fix undefined runtimeContext using memory from playground (#9328)
- Add readable-streams to global externals, not compatible with CJS compilation (#9735)
- fix: add /api route to default public routes to allow unauthenticated access
  The /api route was returning 401 instead of 200 because it was being caught by the /api/* protected pattern. Adding it to the default public routes ensures the root API endpoint is accessible without authentication while keeping /api/* routes protected. (#9662)
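The routing rule above reduces to "an exact public match wins over the protected prefix pattern". The matcher below is an illustrative assumption, not Mastra's actual auth implementation.

```typescript
// Sketch: '/api' itself is public, while everything under '/api/' stays protected.
const publicRoutes = ['/api', '/health'];

function requiresAuth(path: string): boolean {
  if (publicRoutes.includes(path)) return false; // exact public match wins
  return path.startsWith('/api/'); // the /api/* pattern stays protected
}
```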
@mastra/deployer-cloud
- update peerdeps (5ca1cca)
@mastra/deployer-cloudflare
- update peerdeps (5ca1cca)
@mastra/deployer-netlify
- update peerdeps (5ca1cca)
@mastra/deployer-vercel
- update peerdeps (5ca1cca)
@mastra/dynamodb
- update peerdeps (5ca1cca)
@mastra/evals
- update peerdeps (5ca1cca)
@mastra/fastembed
- update peerdeps (5ca1cca)
@mastra/google-cloud-pubsub
- update peerdeps (5ca1cca)
@mastra/inngest
- update peerdeps (5ca1cca)
@mastra/lance
- update peerdeps (5ca1cca)
@mastra/langfuse
- update peerdeps (5ca1cca)
@mastra/langsmith
- update peerdeps (5ca1cca)
@mastra/libsql
- update peerdeps (5ca1cca)
@mastra/loggers
- update peerdeps (5ca1cca)
@mastra/longmemeval
- update peerdeps (5ca1cca)
@mastra/mcp
- update peerdeps (5ca1cca)
@mastra/mcp-docs-server
- update peerdeps (5ca1cca)
@mastra/mcp-registry-registry
- update peerdeps (5ca1cca)
@mastra/memory
- update peerdeps (5ca1cca)
@mastra/mongodb
- update peerdeps (5ca1cca)
@mastra/mssql
- update peerdeps (5ca1cca)
@mastra/observability
- update peerdeps (5ca1cca)
@mastra/opensearch
- update peerdeps (5ca1cca)
@mastra/otel-exporter
- update peerdeps (5ca1cca)
@...
2025-10-28
Highlights
Tool Schema Validation
Fixed a critical bug in @mastra/core where tool input validation used the original Zod schema while LLMs received a transformed version. This caused validation failures with models like OpenAI o3 and Claude 3.5 Haiku that send valid responses matching the transformed schema (e.g., converting .optional() to .nullable()).
Changelog
@mastra/ai-sdk
- Fix usage tracking with agent network (#9226)
@mastra/arize
- Fixed import issues in exporters. (#9331)
- fix(@mastra/arize): Auto-detect Arize endpoint when the endpoint field is not provided
  When spaceId is provided to the ArizeExporter constructor and endpoint is not, pre-populate endpoint with the default Arize AX endpoint. (#9250)
@mastra/braintrust
- Fixed import issues in exporters. (#9331)
@mastra/core
- Fix agent onChunk callback receiving wrapped chunk instead of direct chunk (#9402)
- Ensure model_generation spans end before agent_run spans. (#9393)
- Fix OpenAI schema validation errors in processors (#9400)
- Don't call `os.homedir()` at the top level (lazily invoke it instead) to accommodate sandboxed environments (#9211)
- Detect thenable objects returned by AI model providers (#8905)
- Bug fix: Use input processors that are passed in generate or stream agent options rather than always defaulting to the processors set on the Agent class. (#9407)
- Fix tool input validation to use schema-compat transformed schemas
  Previously, tool input validation used the original Zod schema while the LLM received a schema-compat transformed version. This caused validation failures when LLMs (like OpenAI o3 or Claude 3.5 Haiku) sent arguments matching the transformed schema but not the original.
  For example:
  - OpenAI o3 reasoning models convert `.optional()` to `.nullable()`, sending `null` values
  - Claude 3.5 Haiku strips min/max string constraints, sending shorter strings
  - Validation would reject these valid responses because it checked against the original schema
  The fix ensures validation uses the same schema-compat processed schema that was sent to the LLM, eliminating this mismatch. (#9258)
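The mismatch is easy to see with two hand-rolled validators standing in for the Zod schemas (no Zod dependency; this is an illustration, not the library's code): one models the original `.optional()` field, the other models the transformed `.nullable()` field.

```typescript
// Validator modeling the original schema: field is `.optional()` (string or absent).
const validateOriginal = (v: unknown) => v === undefined || typeof v === 'string';
// Validator modeling the transformed schema: field is `.nullable()` (string or null).
const validateTransformed = (v: unknown) => v === null || typeof v === 'string';

// What an o3-style model sends after the .optional() -> .nullable() transform:
const llmArgument = null;
```

Validating `llmArgument` against the original schema rejects it, while the transformed schema (the one the model was actually shown) accepts it, which is exactly why validation now uses the transformed schema.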
- Add import for WritableStream in execution-engine and dedupe llm.getModel in agent.ts (#9185)
- pass writableStream parameter to workflow execution (#9139)
- Save correct status in snapshot for all workflow parallel steps. This ensures that when you poll a workflow run result using `getWorkflowRunExecutionResult(runId)`, you get the right status for all parallel steps (#9379)
- Add ability to pass agent options when wrapping an agent with createStep. This allows configuring agent execution settings when using agents as workflow steps. (#9199)
- Fix network loop iteration counter and usage promise handling:
  - Fixed iteration counter in network loop that was stuck at 0 due to a falsy check. Properly handled zero values to ensure maxSteps is correctly enforced.
  - Fixed usage promise resolution in RunOutput stream by properly resolving or rejecting the promise on stream close, preventing hanging promises when streams complete. (#9408)
- Workflow validation zod v4 support (#9319)
- Fix usage tracking with agent network (#9226)
@mastra/deployer
- Add exportConditions options to nodeResolve plugin to ensure proper handling of Node.js export condition resolution during production builds. (#9394)
- Add better error handling during `mastra build` for `ERR_MODULE_NOT_FOUND` cases. (#9127)
@mastra/deployer-netlify
- Do not apply ESM shim to output as Netlify should handle this already (#9239)
@mastra/inngest
- Fix Inngest workflow tests by adding missing imports and updating middleware path. (#9259)
@mastra/lance
- Fix eval filtering to use NULL checks instead of length function for compatibility with LanceDB 0.22.x (#9191)
@mastra/langfuse
- Fixed import issues in exporters. (#9331)
@mastra/langsmith
- Fixed import issues in exporters. (#9331)
@mastra/mssql
- Implemented AI tracing and observability features
  - Added createAISpan, updateAISpan, getAITrace, getAITracesPaginated
  - Added batchCreateAISpans, batchUpdateAISpans, batchDeleteAITraces
  - Automatic performance indexes for AI spans
- Implemented workflow update methods
  - Added updateWorkflowResults with row-level locking (UPDLOCK, HOLDLOCK)
  - Added updateWorkflowState with row-level locking
  - Concurrent update protection for parallel workflow execution
- Added index management API
  - Added createIndex, listIndexes, describeIndex, dropIndex methods
  - Exposed index management methods directly on the store instance
  - Support for composite indexes, unique constraints, and filtered indexes
- Documentation improvements
  - Comprehensive README with complete API reference (58 methods)
  - Detailed feature descriptions for all storage capabilities
  - Index management examples and best practices
  - Updated to reflect all atomic transaction usage (#9280)
@mastra/observability
- Fixed import issues in exporters. (#9331)
@mastra/otel-exporter
- Fixed import issues in exporters. (#9331)
@mastra/playground-ui
- Update MainSidebar component to fit required changes in Cloud CTA link (#9318)
- Render zod unions and discriminated unions correctly in dynamic form. (#9317)
- Extract more components to playground-ui for sharing with cloud (#9241)
- Move some components to playground-ui for usage in cloud (#9177)
@mastra/schema-compat
- Fix Zod v4 toJSONSchema bug with z.record() single-argument form
  Zod v4 has a bug in the single-argument form of `z.record(valueSchema)` where it incorrectly assigns the value schema to `keyType` instead of `valueType`, leaving `valueType` undefined. This causes `toJSONSchema()` to throw "Cannot read properties of undefined (reading '_zod')" when processing schemas containing `z.record()` fields.
  This fix patches affected schemas before conversion by detecting records with a missing `valueType` and correctly assigning the schema to `valueType` while setting `keyType` to `z.string()` (the default). The patch recursively handles nested schemas, including those wrapped in `.optional()`, `.nullable()`, arrays, unions, and objects. (#9265)
- Improved reliability of string field types in tool schema compatibility (#9266)
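The core of the patch can be modeled with plain objects standing in for Zod's internal record definition (this is a simplified illustration, not schema-compat's actual code):

```typescript
// Simplified model of the patch: when the buggy single-argument form leaves
// `valueType` undefined, move the schema from `keyType` to `valueType` and
// default `keyType` to a string schema (strings stand in for Zod schemas).
interface RecordDef {
  keyType?: unknown;
  valueType?: unknown;
}

function patchRecordDef(def: RecordDef): RecordDef {
  if (def.valueType === undefined && def.keyType !== undefined) {
    return { keyType: 'string-schema', valueType: def.keyType };
  }
  return def; // well-formed two-argument records are left untouched
}
```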
create-mastra
- Update MainSidebar component to fit required changes in Cloud CTA link (#9318)
mastra
- Use dynamic model for scorers in create cli (#9188)
- Update MainSidebar component to fit required changes in Cloud CTA link (#9318)
- Better handle errors during `mastra start` and throw them with Mastra's logger. Also add special error handling for `ERR_MODULE_NOT_FOUND` cases. (#9127)
- Make sure that `mastra init` also installs the `mastra` CLI package (if not already installed) (#9179)
2025-10-21
Changelog
@mastra/agent-builder
- Update peerdeps to 0.23.0-0 (#9043)
@mastra/ai-sdk
- Pass original messages in chatRoute to fix uiMessages duplication #8830 (#8904)
- network routing agent text delta ai-sdk streaming (#8979)
- Support writing custom top level stream chunks (#8922)
- Refactor workflowstream into workflow output with fullStream property (#9048)
- Update peerdeps to 0.23.0-0 (#9043)
- Fix streaming of custom chunks, workflow & network support (#9109)
@mastra/arize
- feat(otel-exporter): Add customizable 'exporter' constructor parameter
  You can now pass an instantiated class inheriting from `TraceExporter` into `OtelExporter`. This circumvents the default package detection: a `TraceExporter` is no longer instantiated automatically if one is instead passed in to the `OtelExporter` constructor.
  feat(arize): Initial release of the @mastra/arize observability package
  The `@mastra/arize` package exports an `ArizeExporter` class that can be used to easily send AI traces from Mastra to Arize AX, Arize Phoenix, or any OpenInference-compatible collector. It sends traces using `BatchSpanProcessor` over OTLP connections. It leverages the `@mastra/otel-exporter` package, reusing `OtelExporter` for transmission and span management. See the README in `observability/arize/README.md` for more details. (#8827)
- fix(observability): Add ParentSpanContext to MastraSpans with parentage (#9085)
@mastra/astra
- Update peerdeps to 0.23.0-0 (#9043)
@mastra/braintrust
- Update peerdeps to 0.23.0-0 (#9043)
- Rename LLM span types and attributes to use Model prefix
  BREAKING CHANGE: This release renames AI tracing span types and attribute interfaces to use the "Model" prefix instead of "LLM":
  - `AISpanType.LLM_GENERATION` → `AISpanType.MODEL_GENERATION`
  - `AISpanType.LLM_STEP` → `AISpanType.MODEL_STEP`
  - `AISpanType.LLM_CHUNK` → `AISpanType.MODEL_CHUNK`
  - `LLMGenerationAttributes` → `ModelGenerationAttributes`
  - `LLMStepAttributes` → `ModelStepAttributes`
  - `LLMChunkAttributes` → `ModelChunkAttributes`
  - `InternalSpans.LLM` → `InternalSpans.MODEL`
  This change better reflects that these span types apply to all AI models, not just Large Language Models.
  Migration guide:
  - Update all imports: `import { ModelGenerationAttributes } from '@mastra/core/ai-tracing'`
  - Update span type references: `AISpanType.MODEL_GENERATION`
  - Update InternalSpans usage: `InternalSpans.MODEL` (#9105)
@mastra/chroma
- Update peerdeps to 0.23.0-0 (#9043)
@mastra/clickhouse
- Update peerdeps to 0.23.0-0 (#9043)
@mastra/client-js
- Add tool call approval (#8649)
- Fix error handling and serialization in agent streaming to ensure errors are consistently exposed and preserved. (#9192)
@mastra/cloud
- Update peerdeps to 0.23.0-0 (#9043)
@mastra/cloudflare
- Support for custom resume labels mapping to step to be resumed (#8941)
- Update peer dependencies to match core package version bump (0.21.2) (#8941)
- Update peerdeps to 0.23.0-0 (#9043)
@mastra/cloudflare-d1
- Update peerdeps to 0.23.0-0 (#9043)
@mastra/core
- Update provider registry and model documentation with latest models and providers (c67ca32)
- Update provider registry and model documentation with latest models and providers (efb5ed9)
- Add deprecation warnings for format:ai-sdk (#9018)
- network routing agent text delta ai-sdk streaming (#8979)
- Support writing custom top level stream chunks (#8922)
- Consolidate streamVNext logic into stream, move old stream function into streamLegacy (#9092)
- Fix incorrect type assertions in Tool class. Created `MastraToolInvocationOptions` type to properly extend AI SDK's `ToolInvocationOptions` with Mastra-specific properties (`suspend`, `resumeData`, `writableStream`). Removed unsafe type assertions from tool execution code. (#8510)
- fix(core): Fix Gemini message ordering validation errors (#7287, #8053)
  Fixes the Gemini API "single turn requests" validation error by ensuring the first non-system message is from the user role. This resolves errors when:
  - Messages start with the assistant role (e.g., from memory truncation)
  - Tool-call sequences begin with assistant messages
  Breaking Change: Empty or system-only message lists now throw an error instead of adding a placeholder user message, preventing confusing LLM responses.
  This fix handles both issue #7287 (tool-call ordering) and #8053 (single-turn validation) by inserting a placeholder user message when needed. (#7287)
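The ordering rule can be sketched as follows. Names, the placeholder content, and the exact error message are illustrative assumptions, not Mastra's internals.

```typescript
// Sketch: ensure the first non-system message has the user role, inserting a
// placeholder user turn when it does not; reject empty/system-only lists.
interface Message {
  role: 'system' | 'user' | 'assistant';
  content: string;
}

function ensureUserFirst(messages: Message[]): Message[] {
  const idx = messages.findIndex(m => m.role !== 'system');
  if (idx === -1) {
    // empty or system-only lists are rejected instead of silently padded
    throw new Error('Message list must contain at least one non-system message');
  }
  if (messages[idx].role !== 'user') {
    const copy = [...messages];
    copy.splice(idx, 0, { role: 'user', content: '.' }); // placeholder user turn
    return copy;
  }
  return messages;
}
```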
- Add support for external trace and parent span IDs in TracingOptions. This enables integration with external tracing systems by allowing new AI traces to be started with existing traceId and parentSpanId values. The implementation includes OpenTelemetry-compatible ID validation (32 hex chars for trace IDs, 16 hex chars for span IDs). (#9053)
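The ID formats mentioned above can be checked with simple hex patterns. This validator is a sketch of the stated rule plus one OpenTelemetry convention (all-zero IDs are invalid), not Mastra's actual implementation.

```typescript
// OpenTelemetry-style ID validation: 32 lowercase hex chars for a trace ID,
// 16 for a span ID; all-zero IDs are treated as invalid per the OTel spec.
const TRACE_ID_RE = /^[0-9a-f]{32}$/;
const SPAN_ID_RE = /^[0-9a-f]{16}$/;

function isValidTraceId(id: string): boolean {
  return TRACE_ID_RE.test(id) && !/^0+$/.test(id);
}

function isValidSpanId(id: string): boolean {
  return SPAN_ID_RE.test(id) && !/^0+$/.test(id);
}
```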
- Updated `watch` and `watchAsync` methods to use proper function overloads instead of generic conditional types, ensuring compatibility with the base Run class signatures. (#9048)
- Fix tracing context propagation to agent steps in workflows
  When creating a workflow step from an agent using `createStep(myAgent)`, the tracing context was not being passed to the agent's `stream()` and `streamLegacy()` methods. This caused tracing spans to break in the workflow chain.
  This fix ensures that `tracingContext` is properly propagated to both `agent.stream()` and `agent.streamLegacy()` calls, matching the behavior of tool steps, which already propagate tracingContext correctly. (#9074)
Fixes how reasoning chunks are stored in memory to prevent data loss and ensure they are consolidated as single message parts rather than split into word-level fragments. (#9041)
-
fixes an issue where input processors couldn't add system or assistant messages. Previously all messages from input processors were forced to be user messages, causing an error when trying to add other role types. (#8835)
-
fix(core): Validate structured output at text-end instead of flush
Fixes structured output validation for Bedrock and LMStudio by moving validation from flush() to the text-end chunk. Eliminates finishReason heuristics, adds special token extraction for LMStudio, and validates at the correct point in the stream lifecycle. (#8934)
-
fix model.loop.test.ts tests to use structuredOutput.schema and add assertions (#8926)
-
Add initialState as an option to .streamVNext() (#9071)
-
added resourceId and runId to workflow_run metadata in ai tracing (#9031)
-
When using OpenAI models with JSON response format, automatically enable strict schema validation. (#8924)
-
Fix custom metadata preservation in UIMessages when loading threads. The getMessagesHandler now converts messagesV2 (V2 format with metadata) instead of messages (V1 format without metadata) to AIV5.UI format. Also updates the abstract MastraMemory.query() return type to include messagesV2 for proper type safety. (#8938)
-
Fix TypeScript type errors when using provider-defined tools from external AI SDK packages.
Agents can now accept provider tools like google.tools.googleSearch() without type errors. Creates a new @internal/external-types package to centralize AI SDK type re-exports and adds a ProviderDefinedTool structural type to handle tools from different package versions/instances due to TypeScript's module path discrimination. (#8940)
-
feat(ai-tracing): Add automatic metadata extraction from RuntimeContext to spans
Enables automatic extraction of RuntimeContext values as metadata for AI tracing spans across entire traces.
Key features:
-
Configure runtimeContextKeys in TracingConfig to extract specific keys from RuntimeContext
-
Add per-request keys via tracingOptions.runtimeContextKeys for trace-specific additions
-
Supports dot notation for nested values (e...
2025-10-14
Highlights
Model Routing everywhere
Model configuration has been unified across @mastra/core, @mastra/evals, and related packages, with all components now accepting the same flexible Model Configuration. This enables consistent model specification using magic strings ("openai/gpt-4o"), config objects with custom URLs, or dynamic resolution functions across scorers, processors, and relevance scoring components.
// All of these now work everywhere models are accepted
const scorer = createScorer({
judge: { model: "openai/gpt-4o" } // Magic string
});
const processor = new ModerationProcessor({
model: { id: "custom/model", url: "https://..." } // Custom config
});
const relevanceScorer = new MastraAgentRelevanceScorer(
async (ctx) => ctx.getModel() // Dynamic function
);
AI SDK v5 Compatibility & Streaming
We've revamped the AI SDK documentation. You can now use the useChat hook with Networks and Workflows. When you use Agents and Workflows as tools, you receive a custom data part that lets you render a tailored Tool Widget containing all the necessary information.
"use client";
import { useChat } from "@ai-sdk/react";
import { AgentTool } from '../ui/agent-tool';
import type { AgentDataPart } from "@mastra/ai-sdk";
export default function Page() {
const { messages } = useChat({
transport: new DefaultChatTransport({
api: 'http://localhost:4111/chat',
}),
});
return (
<div>
{messages.map((message) => (
<div key={message.id}>
{message.parts.map((part, i) => {
switch (part.type) {
case 'data-tool-agent':
return (
<AgentTool {...part.data as AgentDataPart} key={`${message.id}-${i}`} />
);
default:
return null;
}
})}
</div>
))}
</div>
);
}
Build System Changes
We've updated the build pipeline to better support TypeScript packages in workspaces. We now detect packages that we cannot build, mostly binary modules, and log instructions for handling them.
Changelog
@mastra/agent-builder
- Update structuredOutput to use response format by default, with an opt-in to JSON prompt injection. Replaced internal usage of output with structuredOutput. (#8557)
- Update peer dependencies to match core package version bump (0.21.0) (#8557)
@mastra/ai-sdk
- pass runtimeContext to agent stream options in chatRoute (#8641)
- Improve types for networkRoute and workflowRoute functions (#8844)
- ai-sdk workflow route, agent network route (#8672)
- nested ai-sdk workflows and networks streaming support (#8614)
@mastra/astra
- Update peer dependencies to match core package version bump (0.21.0) (#8619)
- Update peer dependencies to match core package version bump (0.21.0) (#8557)
- Update peer dependencies to match core package version bump (0.21.0) (#8626)
- Update peer dependencies to match core package version bump (0.21.0) (#8686)
@mastra/braintrust
- add traceId as root_span_id for braintrust traces (#8821)
- preserve Mastra span id when exported to Braintrust (#8714)
- Update peer dependencies to match core package version bump (0.21.0) (#8619)
- Update peer dependencies to match core package version bump (0.21.0) (#8557)
- Update peer dependencies to match core package version bump (0.21.0) (#8626)
- Update peer dependencies to match core package version bump (0.21.0) (#8686)
@mastra/chroma
- Update peer dependencies to match core package version bump (0.21.0) (#8619)
- Update peer dependencies to match core package version bump (0.21.0) (#8557)
- Update peer dependencies to match core package version bump (0.21.0) (#8626)
- Update peer dependencies to match core package version bump (0.21.0) (#8686)
@mastra/clickhouse
- Update peer dependencies to match core package version bump (0.21.0) (#8619)
- Update peer dependencies to match core package version bump (0.21.0) (#8557)
- Update peer dependencies to match core package version bump (0.21.0) (#8626)
- Update peer dependencies to match core package version bump (0.21.0) (#8686)
@mastra/client-js
- support model router in structured output and client-js (#8686)
- Make sure to convert the agent instructions when showing them (#8702)
@mastra/cloud
- Update peer dependencies to match core package version bump (0.21.0) (#8619)
- Update peer dependencies to match core package version bump (0.21.0) (#8557)
- Update peer dependencies to match core package version bump (0.21.0) (#8626)
- Update peer dependencies to match core package version bump (0.21.0) (#8686)
@mastra/cloudflare
- Update peer dependencies to match core package version bump (0.21.0) (#8619)
- Update peer dependencies to match core package version bump (0.21.0) (#8557)
- Update peer dependencies to match core package version bump (0.21.0) (#8626)
- Update peer dependencies to match core package version bump (0.21.0) (#8686)
@mastra/cloudflare-d1
- Update peer dependencies to match core package version bump (0.21.0) (#8619)
- Update peer dependencies to match core package version bump (0.21.0) (#8557)
- Update peer dependencies to match core package version bump (0.21.0) (#8626)
- Update peer dependencies to match core package version bump (0.21.0) (#8686)
@mastra/core
-
Fix aisdk format in workflow breaking stream (#8716)
-
Standardize model configuration across all Mastra components
All model configuration points now accept the same flexible MastraModelConfig type as the Agent class:
- Scorers: Judge models now support magic strings, config objects, and dynamic functions
- Input/Output Processors: ModerationProcessor and PIIDetector accept flexible model configs
- Relevance Scorers: MastraAgentRelevanceScorer supports all model config types
This change provides:
- Consistent API across all components
Support for magic strings (e.g., "openai/gpt-4o")
- Support for OpenAI-compatible configs with custom URLs
- Support for dynamic model resolution functions
- Full backward compatibility with existing code
Example:
// All of these now work everywhere models are accepted
const scorer = createScorer({
judge: { model: "openai/gpt-4o" } // Magic string
});
const processor = new ModerationProcessor({
model: { id: "custom/model", url: "https://..." } // Custom config
});
const relevanceScorer = new MastraAgentRelevanceScorer(
async (ctx) => ctx.getModel() // Dynamic function
); (#8626)
- fix: preserve providerOptions through message list conversions (#8836)
- improve error propagation in agent stream failures (#8733)
- prevent duplicate deprecation warning logs and deprecate modelSettings.abortSignal in favor of top-level abortSignal (#8840)
- Removed logging of massive model objects in tool failures (#8839)
- Create unified Sidebar component to use on Playground and Cloud (#8655)
- Added tracing of input & output processors (this includes using structuredOutput) (#8623)
- support model router in structured output and client-js (#8686)
- ai-sdk workflow route, agent network route (#8672)
- Handle maxRetries in agent.generate/stream properly. Add deprecation warning to top level abortSignal in AgentExecuteOptions as that property is duplicated inside of modelSettings ...
2025-10-08
Highlights
Workflows
Workflows now support global state: you can read state in each of your defined steps and set it with setState. This makes it easier to manage state across multiple steps instead of passing it through input/output variables.
const firstStep = createStep({
id: "first-step",
execute({ setState }) {
setState({
myValue: "a value",
});
},
});
const secondStep = createStep({
id: "second-step",
execute({ state }) {
console.log(state.myValue);
},
});
createWorkflow({
id: "my-worfklow",
stateSchema: z.object({
myValue: z.string(),
}),
}).then(firstStep).then(secondStep);
Memory
Working memory can be stored using thread metadata. This allows you to set the initial working memory directly.
const thread = await memory.createThread({
threadId: "thread-123",
resourceId: "user-456",
title: "Medical Consultation",
metadata: {
workingMemory: `# Patient Profile
- Name: John Doe
- Blood Type: O+
- Allergies: Penicillin
- Current Medications: None
- Medical History: Hypertension (controlled)`,
},
});
UI (ai-sdk compatibility)
Improved useChat support from ai-sdk when you're using agents in your tools. You get a custom UI message part called data-tool-agent with all relevant information.
// in src/mastra.ts
export const mastra = new Mastra({
server: {
apiRoutes: [
chatRoute({
path: "/chat",
agent: "my-agent",
}),
],
},
});
// in my useChat file
const { error, status, sendMessage, messages, regenerate, stop } =
useChat<MyMessage>({
transport: new DefaultChatTransport({
api: 'http://localhost:4111/chat',
body: {
}
}),
});
return (
<div className="flex flex-col pt-24 mx-auto w-full max-w-4xl h-screen">
<div className="flex flex-row mx-auto w-full overflow-y-auto gap-4">
<div className="flex-1">
{messages.map(message => {
return (
<div key={message.id} className="whitespace-pre-wrap">
{message.role === 'user' ? 'User: ' : 'AI: '}{' '}
{message.parts
.filter(part => part.type === 'data-tool-agent')
.map((part) => {
return <CustomWidget key={part.id} {...part.data} />
})}
{message.parts
.filter(part => part.type === 'text')
.map((part, index) => {
if (part.type === 'text') {
return <div key={index}>{part.text}</div>;
}
})
}
</div>
);
})}
</div>
</div>
</div>
);
Changelog
@mastra/agent-builder
-
Fix TypeScript errors with provider-defined tools by updating ai-v5 and openai-v5 to matching provider-utils versions. This ensures npm deduplicates to a single provider-utils instance, resolving type incompatibility issues when passing provider tools to Agent.
Also adds deprecation warning to Agent import from root path to encourage using the recommended subpath import. (#8584)
@mastra/ai-sdk
@mastra/chroma
- dependencies updates:
- Updated dependency chromadb@^3.0.17 (from ^3.0.15, in dependencies) (#8554)
@mastra/client-js
@mastra/core
-
workflow run thread more visible (#8539)
-
Add iterationCount to loop condition params (#8579)
-
Mutable shared workflow run state (#8545)
-
avoid refetching memory threads and messages on window focus (#8519)
-
add tripwire reason in playground (#8568)
-
Add validation for index creation (#8552)
-
Save waiting step status in snapshot (#8576)
-
Added AI SDK provider packages to model router for anthropic/google/openai/openrouter/xai (#8559)
-
type fixes and missing changeset (#8545)
-
Convert WorkflowWatchResult to WorkflowResult in workflow graph (#8541)
-
add new deploy to cloud button (#8549)
-
remove icons in entity lists (#8520)
-
add client search to all entities (#8523)
-
Improve JSDoc documentation for Agent (#8389)
-
Properly fix cloudflare randomUUID in global scope issue (#8450)
-
Marked OTEL based telemetry as deprecated. (#8586)
-
Add support for streaming nested agent tools (#8580)
-
Fix TypeScript errors with provider-defined tools by updating ai-v5 and openai-v5 to matching provider-utils versions. This ensures npm deduplicates to a single provider-utils instance, resolving type incompatibility issues when passing provider tools to Agent.
Also adds deprecation warning to Agent import from root path to encourage using the recommended subpath import. (#8584)
-
UX for the agents page (#8517)
-
add icons into playground titles + a link to the entity doc (#8518)
@mastra/dane
- Mutable shared workflow run state (#8545)
@mastra/deployer
-
fix: custom API routes now properly respect authentication requirements
Fixed a critical bug where custom routes were bypassing authentication when they should have been protected by default. The issue was in the isProtectedPath function, which only checked pattern-based protection but ignored custom route configurations.
- Custom routes are now protected by default or when specified with requiresAuth: true
- Custom routes properly inherit protection from parent patterns (like /api/*)
- Routes with explicit requiresAuth: false continue to work as public endpoints
- Enhanced isProtectedPath to consider both pattern matching and custom route auth config
This fixes issue #8421 where custom routes were not being properly protected by the authentication system. (#8469)
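The protection rules described in the entry above can be sketched as follows. The types and function shape here are illustrative assumptions, not the deployer's actual code:

```typescript
interface CustomRoute { path: string; requiresAuth?: boolean; }

// Sketch: a route is protected if a custom route config says so (or by
// default), or if it matches a pattern-based protection rule like '/api/*'.
function isProtectedPath(
  path: string,
  protectedPatterns: string[],
  customRoutes: CustomRoute[],
): boolean {
  const route = customRoutes.find((r) => r.path === path);
  // Explicit opt-out keeps the endpoint public.
  if (route?.requiresAuth === false) return false;
  // Custom routes are protected by default.
  if (route) return true;
  // Otherwise fall back to pattern matching (simple '/*' suffix handling).
  return protectedPatterns.some((p) =>
    p.endsWith('/*') ? path.startsWith(p.slice(0, -1)) : path === p,
  );
}
```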
-
Correctly handle errors in streams. Errors (e.g. rate limiting) before the stream begins are now returned with their code. Mid-stream errors are passed as a chunk (with type: 'error') to the stream. (#8567)
-
Mutable shared workflow run state (#8545)
-
Fix bug when lodash dependencies were used in subdependencies (#8537)
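The mid-stream error behavior noted in the stream-handling fix above can be consumed by branching on the chunk type. This is a self-contained sketch with a mocked stream; the exact chunk shape is an assumption, not Mastra's actual schema:

```typescript
type Chunk =
  | { type: 'text-delta'; text: string }
  | { type: 'error'; error: { message: string; code?: string } };

// Mocked stream standing in for an agent stream that fails partway through.
async function* mockStream(): AsyncGenerator<Chunk> {
  yield { type: 'text-delta', text: 'Hello' };
  yield { type: 'error', error: { message: 'rate limited', code: '429' } };
}

async function consume(stream: AsyncGenerator<Chunk>) {
  let text = '';
  let error: string | undefined;
  for await (const chunk of stream) {
    if (chunk.type === 'error') {
      // Mid-stream errors arrive as chunks rather than thrown exceptions,
      // so partial output already streamed to the user is preserved.
      error = chunk.error.message;
    } else {
      text += chunk.text;
    }
  }
  return { text, error };
}
```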
@mastra/deployer-cloud
@mastra/deployer-cloudflare
- Mutable shared workflow run state (#8545)
- Properly fix cloudflare randomUUID in global scope issue (#8450)
@mastra/deployer-netlify
- Mutable shared workflow run state (#8545)
@mastra/deployer-vercel
- Mutable shared workflow run state (#8545)
@mastra/dynamodb
- dependencies updates:
- Updated dependency @aws-sdk/client-dynamodb@^3.902.0 (from ^3.896.0, in dependencies)
- Updated dependency @aws-sdk/lib-dynamodb@^3.902.0 (from ^3.896.0, in dependencies) (#8436)
@mastra/inngest
@mastra/langsmith
- dependencies updates:
- Updated dependency langsmith@>=0.3.72 (from >=0.3.71, in dependencies) (#8560)
@mastra/longmemeval
- Mutable shared workflow run state (#8545)
@mastra/mcp
- Mutable shared workflow run state (#8545)
@mastra/mcp-docs-server
- Mutable shared workflow run state (#8545)
@mastra/memory
Ensure working memory can be updated through createThread and updateThread (#8513)
- Fix TypeScript errors with provide...
2025-10-03
Mastra Release - 2025-10-03
This release includes improvements to documentation, playground functionality, API naming conventions, and various bug fixes across the platform.
Agents
- Reorganizes the agent memory documentation by explaining async memory configuration, introducing runtime context, and moving detailed content to the appropriate Memory section. #8410
CLI / Playground
- Fixes a bug where the shell option was breaking server startup on Windows environments. #8377
- Adds a dedicated authentication token specifically for the Playground environment. #8420
- Fixes an issue in the playground UI by properly initializing message history for v1 models, ensuring history renders correctly when refreshing a thread. #8427
Client SDK - JS
- Fixes a race condition in the client-js library by ensuring that WritableStream operations await the completion of ongoing pipeTo() calls, preventing locked stream errors and production crashes. #8346
- Adds GenerateVNext support to the React SDK and introduces a function to convert UIMessages to assistant-ui messages. #8345
- Fixes issues with the custom AI SDK output. #8414
Core Platform Components
- [IMPORTANT] Updates API and SDK naming by renaming 'generateVNext' to 'generate' and 'streamVNext' to 'stream', moving previous versions to 'generateLegacy' and 'streamLegacy', and updates all related code, documentation, and examples for consistency and backwards compatibility. #8097
- [TIER 2] Improves structured output handling by converting it from an output processor to an EventEmitter-based stream processor, enabling multiple consumers and direct streaming of structured agent output, while also removing legacy structuredOutput usage. #8229
Deployer
- [TIER 2] Adds support for per-function configuration overrides (maxDuration, memory, regions) in the Vercel deployer via a centralized vcConfigOverrides option, merges these into the generated .vc-config.json, extracts config types for clarity, and updates code style, all while maintaining backward compatibility. #8339
- Adds support for resolving transitive dependencies in monorepos during development in the deployer. #8353
Developer Tools & UI
- Fixes an issue where working memory and semantic recall were not being displayed in the UI. #8358
- Improves the color contrast for code blocks in legacy traces to improve readability. #8385
- Updates the thread display by showing messages in descending order and includes the thread title. #8381
- Fixes issues with model router documentation generation and the playground UI's model picker, including logic errors, copy improvements, UI bugs, environment variable display, and adds responsive design for better mobile support. #8372
MCP
- Updates MCPServer prompts and resource callbacks to access the 'extra' property, including AuthInfo, allowing for authenticated or personalized server interactions. #8233
Memory
- Improves the memory indicator UX by replacing the previous small indicator with a shared Alert component, now displayed on the agent sidebar. #8382
- Fixes the persistence of output processor state across LLM execution steps, ensuring processors retain their state and structured output is generated correctly, while also updating controller references and preventing premature 'finish' chunk processing. #8373
Networks
- [TIER 2] Migrates agent network functionality to the new streamlined agent API, removing the separate vNext network implementation from the playground. #8329
Observability
- Enables observability by default for all templates. #8380
Prod analytics
- Adds a 3-second fetch interval to AI traces, making the UI and trace details update more responsively in real time. #8386
Tools
- [TIER 2] Adds human-in-the-loop capabilities with tool call approval, allowing users to review and approve/decline tool executions before they run. #8360
Workflows
- Fixes a bug where the resourceId was lost when resuming workflows after a server restart by ensuring it is correctly passed through all relevant server handlers and the core workflow's resume logic. #8359
- [TIER 2] Adds the ability to resume and observe interrupted workflow streams in the playground, allowing users to continue streaming results after a workflow is suspended or the frontend stream is closed. #8318
2025-10-01
Mastra Release - 2025-10-01
We are excited to announce the release of our new model router and model fallbacks! You can now choose any model provider and model without needing to install or import it. If one model is not functioning properly, you can automatically fall back to another model.
Agents
- Updates step identification by including description and component key when steps are created from agents or tools, and updates related tests. #8151
- Improves the trace scoring logic to eliminate duplication in building agent payloads and updates related tests. #8280
- Enables the agent to return the selection reason as the result when it cannot route and pick a primitive. #8308
- Fixes TypeScript type inference issues by making the 'suspend' property optional in ToolExecutionContext and resolves module resolution conflicts for DynamicArgument, improving tool usability and type safety. #8305
- Updates the routing agent to throw an error if the required memory parameter is not provided. #8313
- Fixes a race condition in the stream pipeline by adding controller state checks before enqueueing data, preventing errors from stale callbacks attempting to write to a closed ReadableStreamDefaultController during sequential agent tests. #8186
CLI / Playground
- Removes the legacy workflow from both the playground and client-js components. #8017
- Adds model fallback functionality to the playground as a new feature. #7427 [IMPORTANT]
- Improves the playground UI's stream handling by ensuring correct part ordering and robustness when processing streamed assistant messages, including adding empty messages and text parts as needed and properly handling content types. #8234
- Updates the playground to display which model from the fallback list was successfully used. #8167
- Updates the playground's model picker to use the new model router, adding provider connection status indicators, info links to provider docs, improved warning and error handling, and preserving drag-and-drop reordering for multi-model setups. #8332 [IMPORTANT]
- Fixes the network label when memory is not enabled or when the agent has no subagents, addressing a UI bug. #8341
Client SDK - JS
- Fixes a bug in the client-js SDK that caused duplicate storage of the initial user message when client-side tools were executed, ensuring only assistant responses and tool results are sent back to the server instead of resending the original user message. #8187
- Moves the useMastraClient hook and its provider to the React package. #8203
- Fixes missing traceId output when using the aisdk format and disables tracing in input/output processors to prevent unwanted traces. #8263
- Updates the React SDK to convert network chunks into UIMessage objects, enhancing how network data is handled in the UI. #8304
Deployer
- Fixes a bug where a randomUUID call was leaking into the global scope of the Cloudflare worker during the mastra build process by preventing it from being called at import time. #8105
- Adds an environment variable, MASTRA_HIDE_CLOUD_CTA, to allow hiding the Mastra cloud deploy button, requiring a full server restart to take effect. #8137
- Fixes build and unit test issues in the Cloudflare deployer, adds end-to-end tests, and resolves a compatibility bug with @mastra/pg. #8163
- Fixes issues with native dependencies in bun monorepos for the deployer by correcting the bun pack process. #8201
- Updates Mastra Cloud to version 0.1.15, including the removal of a custom header from the JavaScript client, a workflow fix for JSON issues, and an AI SDK dependency update. #8134
- Improves the installation process for indirect external dependencies in the Mastra build system. #8145
- Fixes workspace path handling on Windows by introducing a utility to normalize path separators, resolving issues with workspace detection and comparisons. #7943
- Adds support for Netlify Gateway by introducing a NetlifyGateway class, updating gateway and OpenAI-compatible classes for dynamic URLs and token generation, and enhancing model ID resolution. #8331 [IMPORTANT]
Evals
- Adds conditional chaining to scorer.agentNames to prevent errors when accessing potentially undefined properties. #8199
- Updates the score types to allow input and output to be any type. #8153
- Adds server APIs to retrieve all scores by trace and span ID, along with corresponding test updates. #8237
MCP
- Updates the generateVNext function to correctly return a stream response for the tripwire case and removes unnecessary special handling for tripwire responses. #8122
- Adds comprehensive and accurate TypeScript types to the streamVNext code path, aligns return types, removes dead code, and improves overall type safety and code clarity. #8010
- Updates core error processing to safely parse error objects using safeParse. #8312
- Adds a model router system to @mastra/core, allowing users to specify any OpenAI-compatible provider and model using simple magic strings (e.g., 'openai/gpt-4o'), with automatic provider config resolution and support for custom providers, streamlining model selection and integration. #8235 [IMPORTANT]
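The 'provider/model' magic strings described in the model router entry above resolve a provider and model ID from a single string. A minimal sketch of that parsing, assuming the first slash separates provider from model (parseModelString is a hypothetical name, not @mastra/core's actual resolver):

```typescript
// Sketch: split a magic string like 'openai/gpt-4o' into provider + model.
// Model IDs may themselves contain slashes (e.g. openrouter model paths),
// so only the first slash is treated as the separator.
function parseModelString(model: string): { provider: string; modelId: string } {
  const slash = model.indexOf('/');
  if (slash <= 0 || slash === model.length - 1) {
    throw new Error(`Expected "provider/model", got "${model}"`);
  }
  return { provider: model.slice(0, slash), modelId: model.slice(slash + 1) };
}
```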
Networks
- Fixes agent networking failures with working memory by correctly passing and fetching memory tools, updates prompts to improve agent tool usage, and renames resourceId/resourceType to primitiveId/primitiveType for clarity. #8157
- Fixes issues with the network chunk type and updates the working memory test. #8210
- Improves error processing by ensuring that useful error information in stream error objects is no longer masked. #8270
Observability
- Adds a new LangSmith observability package, adapted from the Braintrust implementation, to enhance monitoring and tracing capabilities. #8160 [TIER 2]
- Introduces the initial release of an OpenTelemetry exporter to enable AI observability. #8273 [TIER 2]
Storage
- Adds optional SSL support to PostgreSQL connection string configuration, improves type safety in config validation, and enhances error handling for invalid configurations. #8178
- Adds a separate spanId column to the scores table to enable fetching scores by both traceId and spanId, instead of storing both under the traceId column. #8154
- Adds full AI tracing support to the PostgreSQL storage adapter, updates SpanType to use strings instead of integers across all stores, and ensures timestamps are set by the database rather than the exporter. #8027
Tools
- Updates type definitions to use structural typing for Zod schemas, ensuring compatibility with both Zod v3 and v4 in tool schemas, and adds CI tests to prevent future regressions. #8150
- Fixes a bug by ensuring that ToolInvocationOptions are correctly used in the execute parameters of createTool. #8206
- Fixes a bug where streaming would hang for client-side tools that were not self-executing by ensuring the stream properly closes when no client tools are run. #8272
Voice
- Updates the speak method in openai-voice to destructure the options parameter to extract key properties while allowing additional OpenAI voice parameters to be passed through, improving flexibility for callers. #8228
- Adds support for multiple Google Voice authentication methods—including API keys, service account key files, and inline credentials—enables use of Application Default Credentials (ADC), and updates the documentation accordingly. #8086
Workflows
- Adds support for passing register options to the Inngest serve function, enabling users to configure additional registration settings when serving Mastra workflows. #8139
- Adds optional Zod-based input and resume data validation to workflow steps, controlled by a new valid...