Conversation

@autologie (Contributor)

Summary

This PR improves chat sidebar performance by paginating the conversation list and optimizing the rendering logic to reduce dependent API calls. It also adds a skeleton loader to give users clear feedback while the list loads.

Related Linear tickets, Github issues, and Community forum posts

Review / Merge checklist

  • PR title and summary are descriptive. (conventions)
  • Docs updated or follow-up ticket created.
  • Tests included.
  • PR Labeled with release/backport (if the PR is an urgent fix that needs to be backported)

@bundlemon

bundlemon bot commented Nov 19, 2025

BundleMon

Unchanged files (2)
| Status | Path | Size | Limits |
| --- | --- | --- | --- |
| | WASM Dependencies: tree-sitter-bash.wasm | 181.26KB | - |
| | WASM Dependencies: tree-sitter.wasm | 74.47KB | - |

No change in files bundle size

Groups updated (2)
| Status | Path | Size | Limits |
| --- | --- | --- | --- |
| | `**/*.js` | 11.34MB (+47.8KB +0.41%) | - |
| | `**/*.css` | 227.69KB (+6.91KB +3.13%) | - |

Final result: ✅


```typescript
async getConversations(
	req: AuthenticatedRequest,
	_res: Response,
	@Query query: ChatHubConversationsRequest,
```
@autologie (Contributor, Author) commented Nov 19, 2025
The conversation list endpoint is fast enough without pagination, but returning many items does slow down UI rendering. That can still happen if the user scrolls very far down the list, but I think we can leave that for later.

Contributor


Yeah, this seems good enough for now I think 👍
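
As a rough illustration of how a client might consume such a paginated endpoint, here is a minimal sketch. The `ConversationPage` shape (`data`, `nextCursor`, `hasMore`) follows the fields mentioned later in the review; `fetchPage` and `loadAllConversations` are hypothetical names, not actual n8n APIs.

```typescript
// Hypothetical response shape for one page of the conversation list.
interface ConversationPage<T> {
	data: T[];
	nextCursor: string | null;
	hasMore: boolean;
}

// Drain all pages by following nextCursor until hasMore is false.
// `fetchPage` stands in for the real API client call.
async function loadAllConversations<T>(
	fetchPage: (cursor?: string) => Promise<ConversationPage<T>>,
): Promise<T[]> {
	const all: T[] = [];
	let cursor: string | undefined;
	do {
		const page = await fetchPage(cursor);
		all.push(...page.data);
		cursor = page.nextCursor ?? undefined;
		if (!page.hasMore) break;
	} while (cursor);
	return all;
}
```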

@autologie autologie requested a review from Cadiac November 19, 2025 12:54
Contributor

@cubic-dev-ai cubic-dev-ai bot left a comment


5 issues found across 15 files

Prompt for AI agents (all 5 issues)

Understand the root cause of the following 5 issues and fix them.


<file name="packages/cli/src/modules/chat-hub/chat-hub.service.ts">

<violation number="1" location="packages/cli/src/modules/chat-hub/chat-hub.service.ts:1514">
Rule violated: **Tests**

Cursor-based pagination (`nextCursor`/`hasMore`) was added to `getConversations` without any tests covering the new limit/cursor invariants, leaving this core listing logic unverified despite the Community PR Guideline requiring tests for new functionality.</violation>
</file>

<file name="packages/cli/src/modules/chat-hub/chat-session.repository.ts">

<violation number="1" location="packages/cli/src/modules/chat-hub/chat-session.repository.ts:61">
Rule violated: **Tests**

Community PR Guidelines §2 require tests for core functionality. The new paginated/cursor-aware `getManyByUserId` implementation adds limit handling and a NotFound error path but the chat-hub test suite contains no coverage for cursor pagination or the new error case. Please add unit or integration tests that verify limiting, cursor-based continuation, and the missing-cursor error.</violation>

<violation number="2" location="packages/cli/src/modules/chat-hub/chat-session.repository.ts:76">
Cursor pagination can return duplicate items: the filter uses `lastMessageAt &lt;= cursor` with `id != cursor`, so records with the same timestamp but smaller IDs (already seen in a prior page) satisfy the cursor clause and get re-sent. Use a lexicographical comparison matching the sort order to advance strictly past the cursor.</violation>
</file>

<file name="packages/frontend/editor-ui/src/features/ai/chatHub/chat.store.ts">

<violation number="1" location="packages/frontend/editor-ui/src/features/ai/chatHub/chat.store.ts:282">
`sessionsLoadingMore` is set to true before the API call but never reset when `fetchSessionsApi` rejects, so a failed request permanently blocks further pagination. Wrap the call in try/finally to clear the flag on both success and error.</violation>

<violation number="2" location="packages/frontend/editor-ui/src/features/ai/chatHub/chat.store.ts:333">
Calling `.some` on `sessions.value?.data` will throw if `sessions.value` is still undefined; use optional chaining on `data` (or default to an empty array) so the check doesn’t crash when the session list hasn’t loaded yet.</violation>
</file>
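
A minimal sketch of the fix suggested in the chat.store.ts finding: wrap the fetch in try/finally so the loading flag is cleared even when the request rejects. The flag name mirrors the review (`sessionsLoadingMore`), but the surrounding function is illustrative, not the actual store code.

```typescript
// Guard flag preventing overlapping "load more" requests.
let sessionsLoadingMore = false;

async function loadMoreSessions(fetch: () => Promise<unknown[]>): Promise<unknown[]> {
	if (sessionsLoadingMore) return [];
	sessionsLoadingMore = true;
	try {
		return await fetch();
	} finally {
		// Cleared on success and error alike, so a failed request
		// never permanently blocks further pagination.
		sessionsLoadingMore = false;
	}
}
```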

Reply to cubic to teach it or ask questions. Re-run a review with @cubic-dev-ai review this PR
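
The duplicate-rows finding above boils down to making the cursor comparison strict and lexicographic with respect to the sort order (`lastMessageAt DESC, id ASC`). A hypothetical standalone sketch of that predicate, not the repository's actual implementation:

```typescript
// Minimal row shape carrying just the two sort keys.
interface SessionRow {
	id: string;
	lastMessageAt: number;
}

// A row belongs to a later page iff it sorts strictly after the cursor
// under (lastMessageAt DESC, id ASC): either its timestamp is strictly
// older, or the timestamp ties and its id is strictly greater.
function isAfterCursor(row: SessionRow, cursor: SessionRow): boolean {
	if (row.lastMessageAt !== cursor.lastMessageAt) {
		return row.lastMessageAt < cursor.lastMessageAt; // older => later page
	}
	return row.id > cursor.id; // tie-break matches the ASC id sort
}
```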

@n8n-assistant n8n-assistant bot added the `core` (Enhancement outside /nodes-base and /editor-ui) and `n8n team` (Authored by the n8n team) labels Nov 19, 2025

```typescript
	.addOrderBy('session.id', 'ASC');

if (cursor) {
	const cursorSession = await this.findOne({
```
Contributor


Instead of just the ID as the cursor, we could also make the endpoint receive lastMessageAt from the FE, skipping the need for this query 🤔

But I think we don't need to improve this further; it seems fast enough, and this also ensures the session belongs to the user, which is nice.
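
A rough sketch of that alternative: encode both `lastMessageAt` and `id` into an opaque cursor that the FE echoes back, so the repository can filter without first looking the cursor session up. All names here are illustrative, and note this approach gives up the implicit ownership check the lookup query provides.

```typescript
// Hypothetical opaque cursor carrying both sort keys, so the repository
// could build the keyset WHERE clause directly:
//   (lastMessageAt < :ts) OR (lastMessageAt = :ts AND id > :id)
interface ConversationCursor {
	lastMessageAt: string;
	id: string;
}

function encodeCursor(c: ConversationCursor): string {
	// Real code might additionally base64-encode this to keep the
	// cursor opaque to clients.
	return JSON.stringify(c);
}

function decodeCursor(raw: string): ConversationCursor {
	return JSON.parse(raw) as ConversationCursor;
}
```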

Contributor

@Cadiac Cadiac left a comment


We still might need to revisit these performance improvements to skip loading all the credentials, which is slow on internal (with thousands of credentials available). But this seems like an improvement to the initial draw speed anyway 👍

I was thinking it would be nice if we could load only credentials of the relevant types. We could also change the model selector to load providers one by one as it is opened / providers are expanded, instead of loading all models from all providers like we do now 🤔

@Cadiac
Contributor

Cadiac commented Nov 19, 2025

Actually, it looks like one of the tests is failing on Postgres; you might want to take a look at that 🤔


@autologie
Contributor Author

> We still might need to revisit these performance improvements to skip loading all the credentials which is slow on internal (with thousands of credentials available). but this seems like an improvement to the initial draw speed anyways 👍
>
> I was thinking it would be nice if we could only load credentials of relevant types. We could also change the model selector to load the providers only one by one when it is opened / providers are expanded, instead of loading all models from all providers like we do now 🤔

Yes, the main chat pane is still slow. Fetching only the relevant models/credentials sounds like a good idea.
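
One possible shape for the lazy per-provider loading idea, as a sketch only (all names are hypothetical, not n8n code): cache the in-flight promise per provider so a provider's models are fetched the first time it is expanded and never refetched on repeat expansions.

```typescript
// Cache of in-flight (or resolved) model fetches, keyed by provider.
const modelCache = new Map<string, Promise<string[]>>();

// Fetch a provider's models lazily; concurrent and repeated calls for the
// same provider share one underlying request.
function getModelsForProvider(
	provider: string,
	fetchModels: (p: string) => Promise<string[]>,
): Promise<string[]> {
	let cached = modelCache.get(provider);
	if (!cached) {
		cached = fetchModels(provider);
		modelCache.set(provider, cached);
	}
	return cached;
}
```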

@autologie autologie requested a review from Cadiac November 20, 2025 13:36
@currents-bot

currents-bot bot commented Nov 20, 2025

E2E Tests: n8n tests passed after 11m 58.6s

🟢 583 · 🔴 0 · ⚪️ 12 · 🟣 6

View Run Details

Run Details

  • Project: n8n

  • Groups: 2

  • Framework: Playwright

  • Run Status: Passed

  • Commit: d87b1d4

  • Spec files: 95

  • Overall tests: 595

  • Duration: 11m 58.6s

  • Parallelization: 8

Groups

| GroupId | Results | Spec Files Progress |
| --- | --- | --- |
| ui | 🟢 528 · 🔴 0 · ⚪️ 12 · 🟣 6 | 88 / 88 |
| ui:isolated | 🟢 55 · 🔴 0 · ⚪️ 0 | 7 / 7 |


This message was posted automatically by currents.dev

@autologie autologie merged commit 11e111e into master Nov 20, 2025
33 checks passed
@autologie autologie deleted the conversation-list-pagination branch November 20, 2025 13:55
@n8n-assistant

n8n-assistant bot commented Nov 25, 2025

Got released with [email protected]


Labels

- core: Enhancement outside /nodes-base and /editor-ui
- n8n team: Authored by the n8n team
- Released
