feat: add pagination and continuation-token support to ListObjects API #2829
base: main
Conversation
This commit adds server-side support for resumable pagination in the ListObjects API to address the issue where partial results are silently returned when internal deadlines are exceeded.

Changes:
- Add encoder field to ListObjectsQuery for token encoding/decoding
- Add listObjectsToken struct to track previously returned objects
- Add gzip compression for tokens (82% size reduction for 100 objects)
- Add ContinuationToken field to ListObjectsResponse
- Generate continuation token when deadline exceeded OR max results hit
- Skip duplicate objects when resuming with a continuation token
- Add comprehensive unit tests for token handling

The token contains all previously returned object IDs (compressed) to ensure no duplicates are returned across paginated requests. This approach guarantees correctness while keeping token sizes manageable through gzip compression.

Note: Full integration requires proto changes in openfga/api repo to add continuation_token field to ListObjectsRequest/Response.

Relates to: openfga#2828
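For orientation, a minimal sketch of the token shape and compression helper the commit message describes (the struct field name, the json tag, and the exact `compressToken` body are assumptions; only `decompressToken` appears verbatim later in this review):

```go
package commands

import (
	"bytes"
	"compress/gzip"
)

// listObjectsToken tracks object IDs already returned to the client so a
// resumed request can skip them. Shape inferred from the tests below; the
// json tag is a guess.
type listObjectsToken struct {
	Objects []string `json:"objects"`
}

// compressToken gzip-compresses a marshaled token. Object IDs share long
// common prefixes ("document:1", "document:2", ...), which is why the
// commit message reports ~82% size reduction for 100 objects.
func compressToken(data []byte) ([]byte, error) {
	var buf bytes.Buffer
	w := gzip.NewWriter(&buf)
	if _, err := w.Write(data); err != nil {
		w.Close()
		return nil, err
	}
	if err := w.Close(); err != nil { // Close flushes and finalizes the gzip stream
		return nil, err
	}
	return buf.Bytes(), nil
}
```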
Walkthrough

Introduced cursor-based pagination to the ListObjects API via continuation tokens. Tokens capture previously returned object IDs, compressed via gzip and encoded using a pluggable encoder. Pagination logic deduplicates results across page boundaries and generates tokens when operations may be incomplete.

Changes
Sequence Diagram

```mermaid
sequenceDiagram
participant Client
participant Server
participant Encoder
Client->>Server: ListObjects(query, continuation_token?)
alt Token provided
Server->>Encoder: Decode & decompress token
Encoder-->>Server: Previously returned IDs
Server->>Server: Skip already-returned objects
end
Server->>Server: Collect results until deadline or limit
alt More results available
Server->>Server: Merge previously returned IDs with current results
Server->>Server: Marshal & compress token
Server->>Encoder: Encode token
Encoder-->>Server: Encoded continuation token
Server-->>Client: Response(objects, continuation_token)
else All results collected
Server-->>Client: Response(objects, empty_token)
end
```
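To make the flow above concrete, here is a hypothetical client-side loop; the `ContinuationToken` request/response fields and the client interface are sketched assumptions pending the openfga/api proto change, not the current API:

```go
package client

import "context"

// Hypothetical shapes: ContinuationToken does not exist on the real
// request/response types until the proto change lands.
type ListObjectsRequest struct {
	StoreID, Type, Relation, User string
	ContinuationToken             string // assumed new field
}

type ListObjectsResponse struct {
	Objects           []string
	ContinuationToken string
}

type listObjectsClient interface {
	ListObjects(ctx context.Context, req *ListObjectsRequest) (*ListObjectsResponse, error)
}

// listAllObjects drains a paginated scan by following continuation tokens.
func listAllObjects(ctx context.Context, c listObjectsClient, req *ListObjectsRequest) ([]string, error) {
	var all []string
	for {
		resp, err := c.ListObjects(ctx, req)
		if err != nil {
			return nil, err
		}
		all = append(all, resp.Objects...)
		if resp.ContinuationToken == "" { // empty token: scan is complete
			return all, nil
		}
		req.ContinuationToken = resp.ContinuationToken // resume where the last page stopped
	}
}
```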
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes
Pre-merge checks and finishing touches

❌ Failed checks (1 warning)
✅ Passed checks (4 passed)
Actionable comments posted: 2
🧹 Nitpick comments (5)
pkg/server/commands/list_objects_test.go (2)
1135-1223: Consider adding test cases for malformed/corrupted tokens.

The token encoding tests cover the happy path well. Consider adding negative test cases:
- Malformed base64 input
- Valid base64 but invalid gzip data
- Valid gzip but invalid JSON structure
This would ensure `decompressToken` and JSON unmarshaling errors are properly handled and returned as `ErrInvalidContinuationToken`. Example test case to add:
t.Run("invalid_compressed_data", func(t *testing.T) { checkResolver := graph.NewLocalChecker() q, err := NewListObjectsQuery(memory.New(), checkResolver, fakeStoreID) require.NoError(t, err) // Valid base64 but not valid gzip invalidData := []byte("not gzip data") encoded, err := q.encoder.Encode(invalidData) require.NoError(t, err) decoded, err := q.encoder.Decode(encoded) require.NoError(t, err) _, err = decompressToken(decoded) require.Error(t, err) })
1242-1262: Add functional tests for pagination behavior in `Execute`.

These tests verify the struct fields exist but don't exercise the actual pagination logic in `Execute`. Consider adding tests that:
- Verify a continuation token is generated when `maxResults` is reached
- Verify a continuation token is generated on deadline exceeded
- Verify previously returned objects are skipped when resuming with a token
Once the proto is updated to accept `continuation_token` in the request, these functional tests will be essential for validating end-to-end behavior.

Would you like me to generate skeleton tests for these scenarios that can be completed once the proto changes land?
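For illustration, such a skeleton might look like the following; it is only a placeholder, since there is no way to feed a token into `Execute` until the proto change lands (hence the `t.Skip`):

```go
func TestListObjectsPaginationResume(t *testing.T) {
	t.Skip("pending continuation_token on ListObjectsRequest (openfga/api proto change)")

	// Sketch: seed a store with more objects than listObjectsMaxResults,
	// call Execute, and assert a non-empty continuation token alongside
	// exactly maxResults objects. Then resume with that token and assert
	// the remaining objects are returned with no duplicates. A variant
	// with a tiny listObjectsDeadline covers the deadline-exceeded path.
}
```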
pkg/server/commands/list_objects.go (3)
122-126: Token stores returned IDs but not traversal state.

The current design re-traverses the graph on each paginated request and filters duplicates. Per the linked issue #2828, the ideal design would "track returned object IDs and traversal state" to avoid redundant work.
This approach is valid for an initial implementation, but be aware that for large graphs:
- Token size grows linearly with results (though gzip helps)
- CPU/IO cost is repeated on each page
Consider documenting this as a known limitation or future optimization opportunity.
752-757: Duplicate skip logic across code paths.

The skip logic at lines 674-679 and 752-757 is identical. Consider extracting to a helper function to reduce duplication and ensure consistent behavior:
```go
func shouldSkipObject(objectID string, previouslyReturned map[string]struct{}) bool {
	if previouslyReturned == nil {
		return false
	}
	_, seen := previouslyReturned[objectID]
	return seen
}
```
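Both call sites could then collapse to a guard like this (sketch; `object` stands in for whatever local variable each site iterates over):

```go
if shouldSkipObject(object, previouslyReturned) {
	continue // already returned on a previous page
}
```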
766-789: Duplicate token generation logic; same error handling concern.

This token generation code is duplicated from lines 693-716 (pipeline path). The same silent error swallowing issue applies here.
Consider extracting to a shared helper:
```go
func (q *ListObjectsQuery) generateContinuationToken(
	objects []string,
	previouslyReturned map[string]struct{},
) (string, error) {
	allObjects := make([]string, 0, len(objects)+len(previouslyReturned))
	allObjects = append(allObjects, objects...)
	for obj := range previouslyReturned {
		allObjects = append(allObjects, obj)
	}

	token := listObjectsToken{Objects: allObjects}
	tokenBytes, err := json.Marshal(token)
	if err != nil {
		return "", err
	}
	compressed, err := compressToken(tokenBytes)
	if err != nil {
		return "", err
	}
	return q.encoder.Encode(compressed)
}
```
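Each generation site would then reduce to a call like this sketch (whether the error is returned or merely logged is the subject of the silent-swallowing comments elsewhere in this review):

```go
encoded, err := q.generateContinuationToken(res.Objects, previouslyReturned)
if err != nil {
	return nil, serverErrors.HandleError("", err)
}
res.ContinuationToken = encoded
```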
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)
- pkg/server/commands/list_objects.go (10 hunks)
- pkg/server/commands/list_objects_test.go (3 hunks)
🧰 Additional context used
🧠 Learnings (1)
📚 Learning: 2025-10-21T19:29:57.962Z
Learnt from: Vic-Dev
Repo: openfga/openfga PR: 2749
File: pkg/server/commands/list_objects.go:557-571
Timestamp: 2025-10-21T19:29:57.962Z
Learning: In the experimental pipeline path for ListObjects (pkg/server/commands/list_objects.go), when `wgraph != nil && q.pipelineEnabled` is true, the implementation intentionally does not enforce `q.listObjectsMaxResults` and returns all results found.
Applied to files:
pkg/server/commands/list_objects.go
🧬 Code graph analysis (1)
pkg/server/commands/list_objects.go (3)
pkg/encoder/encoder.go (1)
- Encoder (9-12)

pkg/encoder/base64.go (1)
- NewBase64Encoder (16-18)

pkg/server/errors/errors.go (1)
- ErrInvalidContinuationToken (24-24)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (2)
- GitHub Check: CodeQL analysis (go)
- GitHub Check: Agent
🔇 Additional comments (3)
pkg/server/commands/list_objects_test.go (1)
1225-1240: LGTM!

The test correctly verifies that the custom encoder option is applied to the query.
pkg/server/commands/list_objects.go (2)
243-248: LGTM!

The encoder option follows the established pattern in the codebase, and the default Base64Encoder aligns with existing pagination implementations like `read.go`.
584-607: Placeholder acknowledged; verify integration once proto is updated.

The token decoding logic is well-structured and handles errors appropriately by returning `ErrInvalidContinuationToken`. Once openfga/api PR #238 lands, remember to:

- Replace `continuationToken := ""` with `req.GetContinuationToken()` (or equivalent, as sketched below)
- Add integration tests exercising the full pagination flow
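That replacement would be a one-line swap at the top of `Execute` (sketch; `GetContinuationToken` is the accessor the proto change is expected to generate):

```go
// Before (current placeholder):
continuationToken := "" // Will be populated from req once proto is updated

// After (once openfga/api PR #238 lands):
continuationToken := req.GetContinuationToken()
```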
```go
// decompressToken decompresses a gzip-compressed token.
func decompressToken(data []byte) ([]byte, error) {
	reader, err := gzip.NewReader(bytes.NewReader(data))
	if err != nil {
		return nil, err
	}
	defer reader.Close()
	return io.ReadAll(reader)
}
```
Check error from gzip reader Close().
gzip.Reader.Close() can return an error if the gzip checksum validation fails, which would indicate data corruption. The deferred Close() ignores this error.
```diff
 func decompressToken(data []byte) ([]byte, error) {
 	reader, err := gzip.NewReader(bytes.NewReader(data))
 	if err != nil {
 		return nil, err
 	}
-	defer reader.Close()
-	return io.ReadAll(reader)
+	decompressed, err := io.ReadAll(reader)
+	if err != nil {
+		reader.Close()
+		return nil, err
+	}
+	if err := reader.Close(); err != nil {
+		return nil, err
+	}
+	return decompressed, nil
 }
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```go
// decompressToken decompresses a gzip-compressed token.
func decompressToken(data []byte) ([]byte, error) {
	reader, err := gzip.NewReader(bytes.NewReader(data))
	if err != nil {
		return nil, err
	}
	decompressed, err := io.ReadAll(reader)
	if err != nil {
		reader.Close()
		return nil, err
	}
	if err := reader.Close(); err != nil {
		return nil, err
	}
	return decompressed, nil
}
```
🤖 Prompt for AI Agents
In pkg/server/commands/list_objects.go around lines 142 to 150, the deferred
gzip.Reader.Close() ignores errors (checksum/data-corruption failures); change
the function to not defer Close but instead call reader.Close() after
io.ReadAll, capture both read and close errors, and return the read error if
present otherwise return the close error (or nil if neither); ensure the reader
is still closed if ReadAll returns an error by calling Close() in a final step
before returning.
```go
// Generate continuation token if results may be incomplete
// This happens when: deadline exceeded OR max results limit reached
deadlineExceeded := timeoutCtx.Err() == context.DeadlineExceeded
hitMaxResults := maxResults > 0 && uint32(len(res.Objects)) >= maxResults
if (deadlineExceeded || hitMaxResults) && len(res.Objects) > 0 {
	// Combine previously returned objects with newly returned objects
	// This ensures no duplicates across paginated requests
	allObjects := res.Objects
	for obj := range previouslyReturned {
		allObjects = append(allObjects, obj)
	}
	token := listObjectsToken{Objects: allObjects}
	tokenBytes, err := json.Marshal(token)
	if err == nil {
		// Compress token to reduce size (object IDs are highly compressible)
		compressed, err := compressToken(tokenBytes)
		if err == nil {
			encoded, err := q.encoder.Encode(compressed)
			if err == nil {
				res.ContinuationToken = encoded
			}
		}
	}
}
```
Silent error swallowing during token generation could cause data loss.
If json.Marshal, compressToken, or encoder.Encode fails, the error is silently ignored and the response is sent without a continuation token. The client will assume results are complete when they're not, causing silent data truncation.
Consider returning an error or logging a warning:
```diff
 if (deadlineExceeded || hitMaxResults) && len(res.Objects) > 0 {
 	allObjects := res.Objects
 	for obj := range previouslyReturned {
 		allObjects = append(allObjects, obj)
 	}
 	token := listObjectsToken{Objects: allObjects}
 	tokenBytes, err := json.Marshal(token)
-	if err == nil {
-		compressed, err := compressToken(tokenBytes)
-		if err == nil {
-			encoded, err := q.encoder.Encode(compressed)
-			if err == nil {
-				res.ContinuationToken = encoded
-			}
-		}
-	}
+	if err != nil {
+		return nil, serverErrors.HandleError("failed to marshal continuation token", err)
+	}
+	compressed, err := compressToken(tokenBytes)
+	if err != nil {
+		return nil, serverErrors.HandleError("failed to compress continuation token", err)
+	}
+	encoded, err := q.encoder.Encode(compressed)
+	if err != nil {
+		return nil, serverErrors.HandleError("failed to encode continuation token", err)
+	}
+	res.ContinuationToken = encoded
 }
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```go
// Generate continuation token if results may be incomplete
// This happens when: deadline exceeded OR max results limit reached
deadlineExceeded := timeoutCtx.Err() == context.DeadlineExceeded
hitMaxResults := maxResults > 0 && uint32(len(res.Objects)) >= maxResults
if (deadlineExceeded || hitMaxResults) && len(res.Objects) > 0 {
	// Combine previously returned objects with newly returned objects
	// This ensures no duplicates across paginated requests
	allObjects := res.Objects
	for obj := range previouslyReturned {
		allObjects = append(allObjects, obj)
	}
	token := listObjectsToken{Objects: allObjects}
	tokenBytes, err := json.Marshal(token)
	if err != nil {
		return nil, serverErrors.HandleError("failed to marshal continuation token", err)
	}
	compressed, err := compressToken(tokenBytes)
	if err != nil {
		return nil, serverErrors.HandleError("failed to compress continuation token", err)
	}
	encoded, err := q.encoder.Encode(compressed)
	if err != nil {
		return nil, serverErrors.HandleError("failed to encode continuation token", err)
	}
	res.ContinuationToken = encoded
}
```
🤖 Prompt for AI Agents
In pkg/server/commands/list_objects.go around lines 693-716, the code currently
swallows errors from json.Marshal, compressToken, and encoder.Encode when
building the continuation token; change this so failures are not silent by
checking each err and returning a wrapped/internal error (or at minimum logging
a clear warning) instead of ignoring it — on Marshal/compress/Encode failure log
details (object count, bucket, prefix) and return an error from the handler so
the client isn’t misled into thinking results are complete; ensure callers
handle the returned error and add/update tests to cover token-generation
failures.
Pull request overview
This PR adds continuation token-based pagination support to the ListObjects API to address the issue where clients cannot detect incomplete results or resume fetching when server deadlines or max result limits are reached. The implementation uses gzip-compressed, base64-encoded tokens containing previously returned object IDs to enable resumable pagination.
- Adds pagination infrastructure with token encoding/decoding and gzip compression
- Implements token generation when deadline is exceeded or max results are reached
- Adds duplicate detection logic to skip previously returned objects on pagination resume
Reviewed changes
Copilot reviewed 2 out of 2 changed files in this pull request and generated 10 comments.
| File | Description |
|---|---|
| pkg/server/commands/list_objects.go | Adds listObjectsToken struct, compression helpers (compressToken/decompressToken), encoder field to ListObjectsQuery, token generation logic in both code paths, and duplicate-skipping logic |
| pkg/server/commands/list_objects_test.go | Adds unit tests for token encoding/decoding, compression ratio validation, encoder option testing, and response struct field verification |
```go
allObjects := res.Objects
for obj := range previouslyReturned {
```
Copilot AI · Nov 26, 2025
This logic incorrectly combines objects. allObjects is assigned res.Objects (the newly returned objects), then the previously returned objects are appended to it. This means the token will contain newly returned objects first, followed by previously returned objects. However, the correct behavior should be to store ALL objects that have been returned across all requests. The current implementation will cause allObjects to grow unboundedly across pagination requests, as it keeps accumulating objects. The token should only contain the complete set of unique objects seen so far, not duplicates.
Suggested change:

```go
// Build a set of unique objects from both res.Objects and previouslyReturned
uniqueObjects := make(map[string]struct{})
for _, obj := range res.Objects {
	uniqueObjects[obj] = struct{}{}
}
for obj := range previouslyReturned {
	uniqueObjects[obj] = struct{}{}
}
allObjects := make([]string, 0, len(uniqueObjects))
for obj := range uniqueObjects {
	allObjects = append(allObjects, obj)
}
```
```go
// Combine previously returned objects with newly returned objects
// This ensures no duplicates across paginated requests
allObjects := listObjectsResponse.Objects
for obj := range previouslyReturned {
```
Copilot AI · Nov 26, 2025
This logic incorrectly combines objects. allObjects is assigned listObjectsResponse.Objects (the newly returned objects), then the previously returned objects are appended to it. This means the token will contain newly returned objects first, followed by previously returned objects. However, the correct behavior should be to store ALL objects that have been returned across all requests. The current implementation will cause allObjects to grow unboundedly across pagination requests, as it keeps accumulating objects. The token should only contain the complete set of unique objects seen so far, not duplicates.
Suggested change:

```go
// Combine previously returned objects with newly returned objects, ensuring uniqueness
uniqueObjects := make(map[string]struct{})
// Add previously returned objects
for obj := range previouslyReturned {
	uniqueObjects[obj] = struct{}{}
}
// Add newly returned objects
for _, obj := range listObjectsResponse.Objects {
	uniqueObjects[obj] = struct{}{}
}
// Convert map keys to slice
allObjects := make([]string, 0, len(uniqueObjects))
for obj := range uniqueObjects {
	allObjects = append(allObjects, obj)
}
```
```go
if err == nil {
	// Compress token to reduce size (object IDs are highly compressible)
	compressed, err := compressToken(tokenBytes)
	if err == nil {
		encoded, err := q.encoder.Encode(compressed)
		if err == nil {
			listObjectsResponse.ContinuationToken = encoded
		}
	}
}
```
Copilot AI · Nov 26, 2025
Errors during token generation are silently ignored here. This is inconsistent with other commands like list_stores.go:63-65 where encoding errors return serverErrors.HandleError("", err). Silent failure could mislead clients into thinking all results were returned when pagination actually failed. Consider handling errors consistently by returning them to the client or at minimum logging them using q.logger.Error().
Suggested change:

```go
if err != nil {
	return nil, serverErrors.HandleError("", err)
}
// Compress token to reduce size (object IDs are highly compressible)
compressed, err := compressToken(tokenBytes)
if err != nil {
	return nil, serverErrors.HandleError("", err)
}
encoded, err := q.encoder.Encode(compressed)
if err != nil {
	return nil, serverErrors.HandleError("", err)
}
listObjectsResponse.ContinuationToken = encoded
```
```go
// Decode continuation token if provided (for pagination support)
// TODO: Get continuation token from request once proto is updated
// For now, this prepares the server-side logic
var previouslyReturned map[string]struct{}
continuationToken := "" // Will be populated from req once proto is updated
if continuationToken != "" {
	decodedToken, err := q.encoder.Decode(continuationToken)
	if err != nil {
		return nil, serverErrors.ErrInvalidContinuationToken
	}
	// Decompress the token (tokens are gzip compressed to reduce size)
	decompressed, err := decompressToken(decodedToken)
	if err != nil {
		return nil, serverErrors.ErrInvalidContinuationToken
	}
	var token listObjectsToken
	if err := json.Unmarshal(decompressed, &token); err != nil {
		return nil, serverErrors.ErrInvalidContinuationToken
	}
	previouslyReturned = make(map[string]struct{}, len(token.Objects))
	for _, obj := range token.Objects {
		previouslyReturned[obj] = struct{}{}
	}
}
```
Copilot AI · Nov 26, 2025
The continuation token decoding and deduplication logic (lines 584-607) lacks integration test coverage. While there are unit tests for token encoding/decoding, there are no tests that verify the full pagination flow: making a request with a continuation token, ensuring duplicates are skipped, and verifying that subsequent requests return the correct results. Consider adding integration tests that simulate multiple paginated requests with continuation tokens.
```go
// Tests for pagination support

func TestListObjectsTokenEncodeDecode(t *testing.T) {
	t.Run("token_roundtrip_with_compression", func(t *testing.T) {
		// Create a query with default encoder
		checkResolver := graph.NewLocalChecker()
		q, err := NewListObjectsQuery(memory.New(), checkResolver, fakeStoreID)
		require.NoError(t, err)
		require.NotNil(t, q)

		// Verify encoder is set
		require.NotNil(t, q.encoder)

		// Test encoding a token with compression
		token := listObjectsToken{Objects: []string{"folder:1", "folder:2", "folder:3"}}
		tokenBytes, err := json.Marshal(token)
		require.NoError(t, err)

		// Compress the token
		compressed, err := compressToken(tokenBytes)
		require.NoError(t, err)

		// Base64 encode
		encoded, err := q.encoder.Encode(compressed)
		require.NoError(t, err)
		require.NotEmpty(t, encoded)

		// Decode and decompress
		decoded, err := q.encoder.Decode(encoded)
		require.NoError(t, err)

		decompressed, err := decompressToken(decoded)
		require.NoError(t, err)

		var decodedToken listObjectsToken
		err = json.Unmarshal(decompressed, &decodedToken)
		require.NoError(t, err)

		require.Equal(t, token.Objects, decodedToken.Objects)
	})

	t.Run("compression_reduces_size", func(t *testing.T) {
		// Create a token with many similar object IDs (highly compressible)
		var objects []string
		for i := 0; i < 100; i++ {
			objects = append(objects, "document:"+strconv.Itoa(i))
		}
		token := listObjectsToken{Objects: objects}
		tokenBytes, err := json.Marshal(token)
		require.NoError(t, err)

		compressed, err := compressToken(tokenBytes)
		require.NoError(t, err)

		// Compressed should be significantly smaller
		compressionRatio := float64(len(compressed)) / float64(len(tokenBytes))
		t.Logf("Original size: %d bytes, Compressed: %d bytes, Ratio: %.2f",
			len(tokenBytes), len(compressed), compressionRatio)

		// Expect at least 50% compression for repetitive object IDs
		require.Less(t, compressionRatio, 0.7, "Expected better compression ratio")
	})

	t.Run("empty_token", func(t *testing.T) {
		checkResolver := graph.NewLocalChecker()
		q, err := NewListObjectsQuery(memory.New(), checkResolver, fakeStoreID)
		require.NoError(t, err)

		token := listObjectsToken{Objects: []string{}}
		tokenBytes, err := json.Marshal(token)
		require.NoError(t, err)

		compressed, err := compressToken(tokenBytes)
		require.NoError(t, err)

		encoded, err := q.encoder.Encode(compressed)
		require.NoError(t, err)
		require.NotEmpty(t, encoded)

		decoded, err := q.encoder.Decode(encoded)
		require.NoError(t, err)

		decompressed, err := decompressToken(decoded)
		require.NoError(t, err)

		var decodedToken listObjectsToken
		err = json.Unmarshal(decompressed, &decodedToken)
		require.NoError(t, err)
		require.Empty(t, decodedToken.Objects)
	})
}

func TestWithListObjectsEncoder(t *testing.T) {
	t.Run("custom_encoder_option", func(t *testing.T) {
		checkResolver := graph.NewLocalChecker()
		customEncoder := encoder.NewBase64Encoder()

		q, err := NewListObjectsQuery(
			memory.New(),
			checkResolver,
			fakeStoreID,
			WithListObjectsEncoder(customEncoder),
		)
		require.NoError(t, err)
		require.NotNil(t, q)
		require.Equal(t, customEncoder, q.encoder)
	})
}

func TestListObjectsResponseHasContinuationToken(t *testing.T) {
	t.Run("response_includes_continuation_token_field", func(t *testing.T) {
		// Verify the ListObjectsResponse struct has the ContinuationToken field
		response := ListObjectsResponse{
			Objects:           []string{"folder:1", "folder:2"},
			ContinuationToken: "test_token",
		}

		require.Equal(t, "test_token", response.ContinuationToken)
		require.Len(t, response.Objects, 2)
	})

	t.Run("empty_continuation_token_when_complete", func(t *testing.T) {
		response := ListObjectsResponse{
			Objects:           []string{"folder:1"},
			ContinuationToken: "", // Empty means all results returned
		}

		require.Empty(t, response.ContinuationToken)
	})
}
```
Copilot AI · Nov 26, 2025
The tests in this section only verify token encoding/decoding mechanics and the struct field existence. There are no integration tests that verify the end-to-end pagination behavior, such as:
- Making a ListObjects request that hits the max results limit and returns a continuation token
- Using that token in a subsequent request and verifying no duplicates are returned
- Verifying that all expected objects are eventually returned across multiple paginated requests
- Testing pagination when the deadline is exceeded
Consider adding integration tests similar to those in list_stores_test.go that test the complete pagination flow.
```go
token := listObjectsToken{Objects: allObjects}
tokenBytes, err := json.Marshal(token)
if err == nil {
	// Compress token to reduce size (object IDs are highly compressible)
	compressed, err := compressToken(tokenBytes)
	if err == nil {
		encoded, err := q.encoder.Encode(compressed)
		if err == nil {
			res.ContinuationToken = encoded
		}
	}
```
Copilot AI · Nov 26, 2025
This token generation logic is duplicated in lines 766-789. Consider extracting it into a helper method to improve maintainability and reduce code duplication. The method could be named generateContinuationToken and accept parameters like allReturnedObjects, previouslyReturned, and return the encoded token string and error.
Suggested change:

```go
encoded, err := generateContinuationToken(q.encoder, allObjects)
if err == nil {
	res.ContinuationToken = encoded
}
```
```go
if err == nil {
	// Compress token to reduce size (object IDs are highly compressible)
	compressed, err := compressToken(tokenBytes)
	if err == nil {
		encoded, err := q.encoder.Encode(compressed)
		if err == nil {
			res.ContinuationToken = encoded
		}
	}
}
```
Copilot AI · Nov 26, 2025
Errors during token generation are silently ignored here. This is inconsistent with other commands like list_stores.go:63-65 where encoding errors return serverErrors.HandleError("", err). Silent failure could mislead clients into thinking all results were returned when pagination actually failed. Consider handling errors consistently by returning them to the client or at minimum logging them using q.logger.Error().
Suggested change:

```go
if err != nil {
	return nil, serverErrors.HandleError("", err)
}
// Compress token to reduce size (object IDs are highly compressible)
compressed, err := compressToken(tokenBytes)
if err != nil {
	return nil, serverErrors.HandleError("", err)
}
encoded, err := q.encoder.Encode(compressed)
if err != nil {
	return nil, serverErrors.HandleError("", err)
}
res.ContinuationToken = encoded
```
```go
// TODO: Get continuation token from request once proto is updated
// For now, this prepares the server-side logic
```
Copilot AI · Nov 26, 2025
This TODO comment indicates that the continuation token functionality is partially implemented but not yet wired to the request. However, the PR description states this is blocked by an API change (openfga/api#238). Consider updating this comment to reference the blocking PR and provide a clearer explanation of what needs to be done once the proto is updated, such as: "TODO: Wire continuation token from req.GetContinuationToken() once openfga/api#238 is merged".
Suggested change:

```go
// TODO: Wire continuation token from req.GetContinuationToken() once https://github.com/openfga/api/pull/238 is merged.
// For now, this prepares the server-side logic.
```
```go
// Expect at least 50% compression for repetitive object IDs
require.Less(t, compressionRatio, 0.7, "Expected better compression ratio")
```
Copilot AI · Nov 26, 2025
[nitpick] The compression ratio test assertion (expecting < 0.7) could be fragile if gzip implementation details change or if the test data pattern changes. Consider making this test more lenient (e.g., < 0.8 or < 0.9) or documenting why 0.7 specifically is required. Alternatively, just verify that compression occurred (compressed size < original size) without asserting a specific ratio, since the goal is to ensure compression is working, not to guarantee a specific compression level.
Suggested change:

```go
// Assert that compression actually reduced the size
require.Less(t, len(compressed), len(tokenBytes), "Compression should reduce size")
```
```go
// For now, this prepares the server-side logic
var previouslyReturned map[string]struct{}
continuationToken := "" // Will be populated from req once proto is updated
if continuationToken != "" {
```
Copilot AI · Nov 26, 2025
This expression compares an expression to itself.
Description
What problem is being solved?
The current `ListObjects` API silently returns partial results when:

- The internal deadline (`listObjectsDeadline`) is exceeded
- The max results limit (`listObjectsMaxResults`) is reached

Clients have no way to detect incomplete scans or resume fetching remaining objects. This breaks production use cases where users may have access to large numbers of objects and correctness/completeness are critical.
How is it being solved?
Adding resumable cursor-based pagination using continuation tokens. The token contains gzip-compressed object IDs that have already been returned, enabling:

- Clients to detect that a scan was incomplete (non-empty continuation token)
- Clients to resume fetching the remaining objects without receiving duplicates
What changes are made to solve it?
- pkg/server/commands/list_objects.go
- pkg/server/commands/list_objects_test.go

Key implementation details:
- `listObjectsToken` struct stores previously returned object IDs

References
Review Checklist
Summary by CodeRabbit
New Features
Tests