
Conversation


Copilot AI commented Nov 18, 2025

Thanks for asking me to work on this. I will get started on it and keep this PR's description up to date as I form a plan and make progress.

Original prompt

Background:

When filesystem tools (write_file/edit_file and the large-result eviction flow in wrapToolCall) return partial file updates, the middleware currently builds the Command.update.files payload from only the file updates produced by the current command's messages. As a result, the new state.files contains only the last-updated file(s) and drops previously existing files, because the existing state is not merged with the new partial updates before the Command is returned.

Root cause:

In createFilesystemMiddleware's wrapToolCall handler, the code collects processed file updates into an accumulatedFiles object initialized from update.files (which is typically empty for a new Command) and then Object.assigns each processed.filesUpdate into it. Consequently, only the updates produced within the single tool call (or the current Command) appear in the returned Command.update.files. The state reducer (fileDataReducer) is registered in the state schema to merge updates, but LangGraph applies the reducer to the Command.update.files diff together with the previous state; since the current Command.update.files does not include the previous state's files, the reducer only ever sees the partial update. The correct approach is to merge the partial updates with the currently known state.files (request.state.files) as they are accumulated, i.e. run the same reducer over the existing state and the partial updates before returning the Command, so that the returned Command.update.files contains the properly merged result.
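
For illustration, the problematic accumulation can be pictured roughly like this; the identifiers mirror the names used in this description rather than the literal source of src/middleware/fs.ts:

    // Sketch of the current accumulation (as described above, not the literal source).
    // accumulatedFiles starts from update.files only; request.state.files is never included.
    const accumulatedFiles: Record<string, FileData> = { ...(update.files ?? {}) };

    // Inside the per-message loop, each partial update is shallow-merged in:
    if (processed.filesUpdate) {
      Object.assign(accumulatedFiles, processed.filesUpdate);
    }

    // The returned Command.update.files therefore contains only this command's
    // partial updates and drops previously existing files from state.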

Proposed fix (changes to src/middleware/fs.ts):

  1. When accumulating file updates in wrapToolCall, initialize accumulatedFiles from the request's current state.files (if any), not only from update.files.

  2. Use the existing fileDataReducer to merge the accumulatedFiles (the current merged state) with each processed.filesUpdate as they are produced. This ensures deletions (null) and replacements are handled consistently.

  3. Keep the rest of the behaviour unchanged: when there are large tool results, write evicted content to the backend and return a Command with update.files containing the merged files map and messages with truncated ToolMessage(s).

Concrete code change (high-level patch):

  • Replace the current accumulation logic inside the isCommand branch of wrapToolCall with code similar to:

    // Build the initial accumulatedFiles from request.state.files (or {}),
    // then merge in update.files from the current Command, if present.
    let accumulatedFiles: Record<string, FileData> = request.state?.files ?? {};
    if (update.files) {
      accumulatedFiles = fileDataReducer(accumulatedFiles, update.files);
    }

    // Inside the existing per-message loop, for each processed result:
    if (processed.filesUpdate) {
      accumulatedFiles = fileDataReducer(accumulatedFiles, processed.filesUpdate);
    }

    // If hasLargeResults, return a new Command with update.files = accumulatedFiles
    // and messages = processedMessages.
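
For context, the merge semantics relied on here can be pictured roughly as follows. This is an illustration only, under the assumption that the reducer treats a null value as a deletion and later entries as replacements (as the notes below describe); the actual fileDataReducer in src/middleware/fs.ts remains the source of truth.

    // Illustrative reducer shape (assumption, not the real implementation):
    // later entries win, and a null value deletes the corresponding file.
    function illustrativeFileDataReducer(
      left: Record<string, FileData>,
      right: Record<string, FileData | null>,
    ): Record<string, FileData> {
      const merged: Record<string, FileData> = { ...left };
      for (const [path, value] of Object.entries(right)) {
        if (value === null) {
          delete merged[path]; // deletion
        } else {
          merged[path] = value; // replacement or addition
        }
      }
      return merged;
    }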

Notes:

  • This change uses the existing fileDataReducer exported in the same file so we keep deletion semantics and any other merging logic centralized.
  • No change to public tool APIs.
  • A unit or integration test should assert that when multiple tool messages return filesUpdate entries within the same middleware invocation, the final Command.update.files contains the merged result and previously existing files from request.state.files are preserved; a minimal sketch follows these notes.
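
A minimal test sketch of that assertion is below. Vitest syntax is assumed, and the import path, the FileData shape, and the makeFile helper are placeholders to adapt to the project's existing test setup; the middleware invocation itself is elided and only the reducer-level merge behaviour is exercised.

    import { describe, expect, it } from "vitest";
    // Assumed import path; fileDataReducer is exported from the middleware module.
    import { fileDataReducer, type FileData } from "../src/middleware/fs";

    // Placeholder constructor: adapt to however FileData is actually built.
    const makeFile = (content: string): FileData =>
      ({ content } as unknown as FileData);

    describe("filesystem middleware file merging", () => {
      it("preserves previously existing files across partial updates", () => {
        const existing = { "a.txt": makeFile("old a") };     // request.state.files
        const firstUpdate = { "b.txt": makeFile("new b") };  // from one tool message
        const secondUpdate = { "a.txt": makeFile("new a") }; // from a later tool message

        let merged = fileDataReducer(existing, firstUpdate);
        merged = fileDataReducer(merged, secondUpdate);

        // The final Command.update.files should contain both files,
        // with the later update winning for a.txt.
        expect(Object.keys(merged).sort()).toEqual(["a.txt", "b.txt"]);
      });
    });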

Please fork the repository into my organization (botbusiness) and open a pull request from the fork with this fix because I (janpawellek) do not have write permissions to the original repository. The PR should contain the code change to src/middleware/fs.ts described above, and an English PR description explaining the bug, root cause, and fix. Commit and branch naming: 'fix/fs-merge-files-update'.

If you need any clarification about which branch to target or additional tests to add, tell me; otherwise proceed to fork, create the branch, apply the patch, and open the PR against langchain-ai/deepagentsjs:main using the fork in botbusiness.

This pull request was created as a result of the prompt from Copilot chat quoted above under "Original prompt".




Copilot AI self-assigned this Nov 18, 2025
Copilot stopped work on behalf of janpawellek due to an error November 18, 2025 12:13
@janpawellek

Sorry, I tried to create this PR using Copilot, but it failed due to insufficient permissions. Now I can't even modify or close it.

I manually created a PR at #63 instead.
Please close this PR in favor of #63. Thank you!

