Conversation


@KTS-o7 KTS-o7 commented Nov 26, 2025

Description

What problem is being solved?

The current ListObjects API silently returns partial results when:

  1. The server deadline (listObjectsDeadline) is exceeded
  2. The max results limit (listObjectsMaxResults) is reached

Clients have no way to detect an incomplete scan or to resume fetching the remaining objects. This breaks production use cases in which users may have access to large numbers of objects and completeness of results is critical.

How is it being solved?

Adding resumable, cursor-based pagination using continuation tokens. Each token contains the gzip-compressed set of object IDs that have already been returned, enabling:

  • Detection of incomplete results (non-empty token = more results available)
  • Resumable pagination without duplicates
  • Compatibility with the graph-traversal nature of ListObjects (no simple DB offset possible)
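
For illustration, a hypothetical client-side loop that consumes this pagination is sketched below. The request/response shapes are stand-ins (the real proto does not yet expose continuation_token, per the note further down), so treat this as a sketch of the intended semantics, not the actual SDK API:

package clientsketch

// listObjectsRequest and listObjectsResponse are hypothetical stand-ins for
// the real openfga/api types, which do not yet carry a continuation token.
type listObjectsRequest struct {
	StoreID, Type, Relation, User string
	ContinuationToken             string
}

type listObjectsResponse struct {
	Objects           []string
	ContinuationToken string // empty token = scan complete
}

// listAllObjects re-issues the request with each returned token until the
// server signals completion with an empty token.
func listAllObjects(call func(listObjectsRequest) (listObjectsResponse, error), req listObjectsRequest) ([]string, error) {
	var all []string
	for {
		resp, err := call(req)
		if err != nil {
			return nil, err
		}
		all = append(all, resp.Objects...)
		if resp.ContinuationToken == "" {
			return all, nil
		}
		req.ContinuationToken = resp.ContinuationToken // resume where we left off
	}
}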

What changes are made to solve it?

File Changes

  • pkg/server/commands/list_objects.go: add encoder, token struct, compression helpers, pagination logic
  • pkg/server/commands/list_objects_test.go: unit tests for token encoding/decoding and compression

Key implementation details:

  • listObjectsToken struct stores previously returned object IDs
  • Gzip compression achieves 82% size reduction (1,397 → 248 bytes for 100 objects)
  • Token generated when deadline exceeded OR max results reached
  • Duplicates skipped when resuming with a token
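
For reference, here is a minimal, self-contained sketch of the token round-trip described above (JSON marshal, then gzip, then base64). It mirrors the listObjectsToken/compressToken/decompressToken helpers this PR adds, but uses base64 directly in place of the pluggable encoder, so the exact names and signatures are illustrative:

package tokensketch

import (
	"bytes"
	"compress/gzip"
	"encoding/base64"
	"encoding/json"
	"io"
)

// listObjectsToken mirrors the struct added in this PR: it records the
// object IDs already returned so a resumed request can skip them.
type listObjectsToken struct {
	Objects []string `json:"objects"`
}

// encodeToken marshals, gzips, and base64-encodes a token.
func encodeToken(t listObjectsToken) (string, error) {
	raw, err := json.Marshal(t)
	if err != nil {
		return "", err
	}
	var buf bytes.Buffer
	zw := gzip.NewWriter(&buf)
	if _, err := zw.Write(raw); err != nil {
		return "", err
	}
	if err := zw.Close(); err != nil { // Close flushes the gzip footer
		return "", err
	}
	return base64.URLEncoding.EncodeToString(buf.Bytes()), nil
}

// decodeToken reverses encodeToken.
func decodeToken(s string) (listObjectsToken, error) {
	var t listObjectsToken
	compressed, err := base64.URLEncoding.DecodeString(s)
	if err != nil {
		return t, err
	}
	zr, err := gzip.NewReader(bytes.NewReader(compressed))
	if err != nil {
		return t, err
	}
	raw, err := io.ReadAll(zr)
	if cerr := zr.Close(); err == nil { // surface checksum errors from Close
		err = cerr
	}
	if err != nil {
		return t, err
	}
	err = json.Unmarshal(raw, &t)
	return t, err
}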

References

Review Checklist

  • I have clicked on "allow edits by maintainers".
  • I have added documentation for new/changed functionality in this PR or in a PR to openfga.dev
  • The correct base branch is being used, if not main
  • I have added tests to validate that the change in functionality is working as expected

Summary by CodeRabbit

  • New Features

    • ListObjects API now supports pagination through continuation tokens, allowing clients to retrieve large result sets incrementally and resume listing from the last retrieved position.
  • Tests

    • Added comprehensive test coverage for pagination functionality and token handling to ensure reliability of the new feature.


This commit adds server-side support for resumable pagination in the
ListObjects API to address the issue where partial results are silently
returned when internal deadlines are exceeded.

Changes:
- Add encoder field to ListObjectsQuery for token encoding/decoding
- Add listObjectsToken struct to track previously returned objects
- Add gzip compression for tokens (82% size reduction for 100 objects)
- Add ContinuationToken field to ListObjectsResponse
- Generate continuation token when deadline exceeded OR max results hit
- Skip duplicate objects when resuming with a continuation token
- Add comprehensive unit tests for token handling

The token contains all previously returned object IDs (compressed) to
ensure no duplicates are returned across paginated requests. This
approach guarantees correctness while keeping token sizes manageable
through gzip compression.

Note: Full integration requires proto changes in openfga/api repo to
add continuation_token field to ListObjectsRequest/Response.

Relates to: openfga#2828
Copilot AI review requested due to automatic review settings November 26, 2025 09:11
@KTS-o7 KTS-o7 requested a review from a team as a code owner November 26, 2025 09:11

linux-foundation-easycla bot commented Nov 26, 2025

CLA Signed
The committers listed above are authorized under a signed CLA.

  • ✅ login: KTS-o7 / name: Krishnatejaswi S (b64d84a)


coderabbitai bot commented Nov 26, 2025

Walkthrough

Introduced cursor-based pagination to the ListObjects API via continuation tokens. Tokens capture previously returned object IDs, are compressed via gzip, and are encoded using a pluggable encoder. Pagination logic deduplicates results across page boundaries and generates tokens when operations may be incomplete.

Changes

Cohort / File(s): Summary

Pagination Implementation (pkg/server/commands/list_objects.go)
Added the listObjectsToken structure for capturing pagination state. Introduced an Encoder field on ListObjectsQuery and a ContinuationToken field on ListObjectsResponse. Implemented the WithListObjectsEncoder option and token compression/decompression helpers (compressToken, decompressToken). Integrated duplicate-skipping logic and token generation into both the pipeline and non-pipeline ListObjects execution paths.

Test Coverage (pkg/server/commands/list_objects_test.go)
Added tests validating token encoding/decoding round-trips, compression effectiveness on repetitive IDs, custom encoder injection via options, and continuation token presence in responses. Tests verify that empty tokens indicate completion.

Sequence Diagram

sequenceDiagram
    participant Client
    participant Server
    participant Encoder

    Client->>Server: ListObjects(query, continuation_token?)
    
    alt Token provided
        Server->>Encoder: Decode & decompress token
        Encoder-->>Server: Previously returned IDs
        Server->>Server: Skip already-returned objects
    end
    
    Server->>Server: Collect results until deadline or limit
    
    alt More results available
        Server->>Server: Merge previously returned IDs with current results
        Server->>Server: Marshal & compress token
        Server->>Encoder: Encode token
        Encoder-->>Server: Encoded continuation token
        Server-->>Client: Response(objects, continuation_token)
    else All results collected
        Server-->>Client: Response(objects, empty_token)
    end

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~45 minutes

  • Token lifecycle and state management: Verify listObjectsToken structure correctly captures and recovers pagination state, including deduplication logic across page boundaries.
  • Compression and encoding integration: Ensure gzip compression/decompression and encoder pattern are correctly applied; validate token size reduction and round-trip correctness.
  • Dual code path consistency: Confirm pagination behavior (duplicate skipping, token generation) is consistently implemented in both pipeline-based and non-pipeline ListObjects execution paths.
  • Public API surface changes: Review new public fields (Encoder, ContinuationToken) and WithListObjectsEncoder option for clarity and backward compatibility.

Pre-merge checks and finishing touches

❌ Failed checks (1 warning)

  • Docstring Coverage: ⚠️ Warning. Docstring coverage is 44.44%, below the required threshold of 80.00%. Run @coderabbitai generate docstrings to improve docstring coverage.

✅ Passed checks (4 passed)

  • Description Check: ✅ Passed. Check skipped because CodeRabbit's high-level summary is enabled.
  • Title check: ✅ Passed. The title accurately summarizes the main objective: adding pagination and continuation-token support to the ListObjects API, matching the core changes in the PR.
  • Linked Issues check: ✅ Passed. The PR implements the core requirements from #2828: a continuation token in responses, cursor-based pagination, duplicate-skipping on resume, and token compression. Tests validate token encoding/decoding and compression.
  • Out of Scope Changes check: ✅ Passed. All changes directly support the pagination/continuation-token objectives. The encoder field, ContinuationToken response field, compression helpers, and token-generation logic all align with the #2828 requirements.



dosubot bot commented Nov 26, 2025

Related Documentation

Checked 6 published document(s) in 1 knowledge base(s). No updates required.


@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 2

🧹 Nitpick comments (5)
pkg/server/commands/list_objects_test.go (2)

1135-1223: Consider adding test cases for malformed/corrupted tokens.

The token encoding tests cover the happy path well. Consider adding negative test cases:

  • Malformed base64 input
  • Valid base64 but invalid gzip data
  • Valid gzip but invalid JSON structure

This would ensure decompressToken and JSON unmarshaling errors are properly handled and returned as ErrInvalidContinuationToken.

Example test case to add:

t.Run("invalid_compressed_data", func(t *testing.T) {
    checkResolver := graph.NewLocalChecker()
    q, err := NewListObjectsQuery(memory.New(), checkResolver, fakeStoreID)
    require.NoError(t, err)

    // Valid base64 but not valid gzip
    invalidData := []byte("not gzip data")
    encoded, err := q.encoder.Encode(invalidData)
    require.NoError(t, err)

    decoded, err := q.encoder.Decode(encoded)
    require.NoError(t, err)

    _, err = decompressToken(decoded)
    require.Error(t, err)
})
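
The other two cases could follow the same shape, again assuming the helpers above (compressToken, decompressToken, q.encoder):

t.Run("malformed_base64", func(t *testing.T) {
    checkResolver := graph.NewLocalChecker()
    q, err := NewListObjectsQuery(memory.New(), checkResolver, fakeStoreID)
    require.NoError(t, err)

    // "!" is illegal in base64, so Decode should fail outright
    _, err = q.encoder.Decode("!!!not-base64!!!")
    require.Error(t, err)
})

t.Run("valid_gzip_invalid_json", func(t *testing.T) {
    // Valid gzip stream whose payload is not a listObjectsToken
    compressed, err := compressToken([]byte("not json"))
    require.NoError(t, err)

    decompressed, err := decompressToken(compressed)
    require.NoError(t, err)

    var token listObjectsToken
    require.Error(t, json.Unmarshal(decompressed, &token))
})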

1242-1262: Add functional tests for pagination behavior in Execute.

These tests verify the struct fields exist but don't exercise the actual pagination logic in Execute. Consider adding tests that:

  1. Verify a continuation token is generated when maxResults is reached
  2. Verify a continuation token is generated on deadline exceeded
  3. Verify previously returned objects are skipped when resuming with a token

Once the proto is updated to accept continuation_token in the request, these functional tests will be essential for validating end-to-end behavior.

Would you like me to generate skeleton tests for these scenarios that can be completed once the proto changes land?
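
For example, a skeleton parked behind t.Skip until the proto change lands might look like this (test names and steps are placeholders):

func TestListObjectsPagination(t *testing.T) {
    t.Skip("enable once ListObjectsRequest carries continuation_token (openfga/api PR #238)")

    t.Run("token_generated_when_max_results_reached", func(t *testing.T) {
        // 1. Seed the store with more objects than listObjectsMaxResults.
        // 2. Execute the query and require a non-empty ContinuationToken.
    })

    t.Run("token_generated_on_deadline_exceeded", func(t *testing.T) {
        // 1. Configure a very short listObjectsDeadline against a large store.
        // 2. Execute and require a non-empty ContinuationToken.
    })

    t.Run("resume_skips_previously_returned_objects", func(t *testing.T) {
        // 1. Execute once; capture the token and the first page of objects.
        // 2. Execute again with the token; require no overlap with page one
        //    and that the union of pages equals the full result set.
    })
}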

pkg/server/commands/list_objects.go (3)

122-126: Token stores returned IDs but not traversal state.

The current design re-traverses the graph on each paginated request and filters duplicates. Per the linked issue #2828, the ideal design would "track returned object IDs and traversal state" to avoid redundant work.

This approach is valid for an initial implementation, but be aware that for large graphs:

  1. Token size grows linearly with results (though gzip helps)
  2. CPU/IO cost is repeated on each page

Consider documenting this as a known limitation or future optimization opportunity.


752-757: Duplicate skip logic across code paths.

The skip logic at lines 674-679 and 752-757 is identical. Consider extracting to a helper function to reduce duplication and ensure consistent behavior.

func shouldSkipObject(objectID string, previouslyReturned map[string]struct{}) bool {
    if previouslyReturned == nil {
        return false
    }
    _, seen := previouslyReturned[objectID]
    return seen
}

766-789: Duplicate token generation logic; same error handling concern.

This token generation code is duplicated from lines 693-716 (pipeline path). The same silent error swallowing issue applies here.

Consider extracting to a shared helper:

func (q *ListObjectsQuery) generateContinuationToken(
    objects []string,
    previouslyReturned map[string]struct{},
) (string, error) {
    allObjects := make([]string, 0, len(objects)+len(previouslyReturned))
    allObjects = append(allObjects, objects...)
    for obj := range previouslyReturned {
        allObjects = append(allObjects, obj)
    }
    token := listObjectsToken{Objects: allObjects}
    tokenBytes, err := json.Marshal(token)
    if err != nil {
        return "", err
    }
    compressed, err := compressToken(tokenBytes)
    if err != nil {
        return "", err
    }
    return q.encoder.Encode(compressed)
}
📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 28ed51f and b64d84a.

📒 Files selected for processing (2)
  • pkg/server/commands/list_objects.go (10 hunks)
  • pkg/server/commands/list_objects_test.go (3 hunks)
🧰 Additional context used
🧠 Learnings (1)
📚 Learning: 2025-10-21T19:29:57.962Z
Learnt from: Vic-Dev
Repo: openfga/openfga PR: 2749
File: pkg/server/commands/list_objects.go:557-571
Timestamp: 2025-10-21T19:29:57.962Z
Learning: In the experimental pipeline path for ListObjects (pkg/server/commands/list_objects.go), when `wgraph != nil && q.pipelineEnabled` is true, the implementation intentionally does not enforce `q.listObjectsMaxResults` and returns all results found.

Applied to files:

  • pkg/server/commands/list_objects.go
🧬 Code graph analysis (1)
pkg/server/commands/list_objects.go (3)
pkg/encoder/encoder.go (1)
  • Encoder (9-12)
pkg/encoder/base64.go (1)
  • NewBase64Encoder (16-18)
pkg/server/errors/errors.go (1)
  • ErrInvalidContinuationToken (24-24)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (2)
  • GitHub Check: CodeQL analysis (go)
  • GitHub Check: Agent
🔇 Additional comments (3)
pkg/server/commands/list_objects_test.go (1)

1225-1240: LGTM!

The test correctly verifies that the custom encoder option is applied to the query.

pkg/server/commands/list_objects.go (2)

243-248: LGTM!

The encoder option follows the established pattern in the codebase and the default Base64Encoder aligns with existing pagination implementations like read.go.


584-607: Placeholder acknowledged; verify integration once proto is updated.

The token decoding logic is well-structured and handles errors appropriately by returning ErrInvalidContinuationToken. Once openfga/api PR #238 lands, remember to:

  1. Replace continuationToken := "" with req.GetContinuationToken() (or equivalent)
  2. Add integration tests exercising the full pagination flow

Comment on lines +142 to +150
// decompressToken decompresses a gzip-compressed token.
func decompressToken(data []byte) ([]byte, error) {
	reader, err := gzip.NewReader(bytes.NewReader(data))
	if err != nil {
		return nil, err
	}
	defer reader.Close()
	return io.ReadAll(reader)
}

⚠️ Potential issue | 🟡 Minor

Check error from gzip reader Close().

gzip.Reader.Close() can return an error if the gzip checksum validation fails, which would indicate data corruption. The deferred Close() ignores this error.

 func decompressToken(data []byte) ([]byte, error) {
 	reader, err := gzip.NewReader(bytes.NewReader(data))
 	if err != nil {
 		return nil, err
 	}
-	defer reader.Close()
-	return io.ReadAll(reader)
+	decompressed, err := io.ReadAll(reader)
+	if err != nil {
+		reader.Close()
+		return nil, err
+	}
+	if err := reader.Close(); err != nil {
+		return nil, err
+	}
+	return decompressed, nil
 }

Comment on lines +693 to +716
// Generate continuation token if results may be incomplete
// This happens when: deadline exceeded OR max results limit reached
deadlineExceeded := timeoutCtx.Err() == context.DeadlineExceeded
hitMaxResults := maxResults > 0 && uint32(len(res.Objects)) >= maxResults
if (deadlineExceeded || hitMaxResults) && len(res.Objects) > 0 {
	// Combine previously returned objects with newly returned objects
	// This ensures no duplicates across paginated requests
	allObjects := res.Objects
	for obj := range previouslyReturned {
		allObjects = append(allObjects, obj)
	}
	token := listObjectsToken{Objects: allObjects}
	tokenBytes, err := json.Marshal(token)
	if err == nil {
		// Compress token to reduce size (object IDs are highly compressible)
		compressed, err := compressToken(tokenBytes)
		if err == nil {
			encoded, err := q.encoder.Encode(compressed)
			if err == nil {
				res.ContinuationToken = encoded
			}
		}
	}
}

⚠️ Potential issue | 🟠 Major

Silent error swallowing during token generation could cause data loss.

If json.Marshal, compressToken, or encoder.Encode fails, the error is silently ignored and the response is sent without a continuation token. The client will assume results are complete when they're not, causing silent data truncation.

Consider returning an error or logging a warning:

 	if (deadlineExceeded || hitMaxResults) && len(res.Objects) > 0 {
 		allObjects := res.Objects
 		for obj := range previouslyReturned {
 			allObjects = append(allObjects, obj)
 		}
 		token := listObjectsToken{Objects: allObjects}
 		tokenBytes, err := json.Marshal(token)
-		if err == nil {
-			compressed, err := compressToken(tokenBytes)
-			if err == nil {
-				encoded, err := q.encoder.Encode(compressed)
-				if err == nil {
-					res.ContinuationToken = encoded
-				}
-			}
-		}
+		if err != nil {
+			return nil, serverErrors.HandleError("failed to marshal continuation token", err)
+		}
+		compressed, err := compressToken(tokenBytes)
+		if err != nil {
+			return nil, serverErrors.HandleError("failed to compress continuation token", err)
+		}
+		encoded, err := q.encoder.Encode(compressed)
+		if err != nil {
+			return nil, serverErrors.HandleError("failed to encode continuation token", err)
+		}
+		res.ContinuationToken = encoded
 	}

Copilot finished reviewing on behalf of KTS-o7 November 26, 2025 09:15
Copilot AI left a comment

Pull request overview

This PR adds continuation token-based pagination support to the ListObjects API to address the issue where clients cannot detect incomplete results or resume fetching when server deadlines or max result limits are reached. The implementation uses gzip-compressed, base64-encoded tokens containing previously returned object IDs to enable resumable pagination.

  • Adds pagination infrastructure with token encoding/decoding and gzip compression
  • Implements token generation when deadline is exceeded or max results are reached
  • Adds duplicate detection logic to skip previously returned objects on pagination resume

Reviewed changes

Copilot reviewed 2 out of 2 changed files in this pull request and generated 10 comments.

File Description

  • pkg/server/commands/list_objects.go: adds the listObjectsToken struct, compression helpers (compressToken/decompressToken), an encoder field on ListObjectsQuery, token-generation logic in both code paths, and duplicate-skipping logic
  • pkg/server/commands/list_objects_test.go: adds unit tests for token encoding/decoding, compression ratio validation, encoder option testing, and response struct field verification


Comment on lines +700 to +701
allObjects := res.Objects
for obj := range previouslyReturned {
Copilot AI Nov 26, 2025

This logic incorrectly combines objects. allObjects is assigned res.Objects (the newly returned objects), then the previously returned objects are appended to it. This means the token will contain newly returned objects first, followed by previously returned objects. However, the correct behavior should be to store ALL objects that have been returned across all requests. The current implementation will cause allObjects to grow unboundedly across pagination requests, as it keeps accumulating objects. The token should only contain the complete set of unique objects seen so far, not duplicates.

Suggested change

-allObjects := res.Objects
-for obj := range previouslyReturned {
+// Build a set of unique objects from both res.Objects and previouslyReturned
+uniqueObjects := make(map[string]struct{})
+for _, obj := range res.Objects {
+	uniqueObjects[obj] = struct{}{}
+}
+for obj := range previouslyReturned {
+	uniqueObjects[obj] = struct{}{}
+}
+allObjects := make([]string, 0, len(uniqueObjects))
+for obj := range uniqueObjects {

Comment on lines +771 to +774
// Combine previously returned objects with newly returned objects
// This ensures no duplicates across paginated requests
allObjects := listObjectsResponse.Objects
for obj := range previouslyReturned {
Copilot AI Nov 26, 2025

This logic incorrectly combines objects. allObjects is assigned listObjectsResponse.Objects (the newly returned objects), then the previously returned objects are appended to it. This means the token will contain newly returned objects first, followed by previously returned objects. However, the correct behavior should be to store ALL objects that have been returned across all requests. The current implementation will cause allObjects to grow unboundedly across pagination requests, as it keeps accumulating objects. The token should only contain the complete set of unique objects seen so far, not duplicates.

Suggested change

-// Combine previously returned objects with newly returned objects
-// This ensures no duplicates across paginated requests
-allObjects := listObjectsResponse.Objects
-for obj := range previouslyReturned {
+// Combine previously returned objects with newly returned objects, ensuring uniqueness
+uniqueObjects := make(map[string]struct{})
+// Add previously returned objects
+for obj := range previouslyReturned {
+	uniqueObjects[obj] = struct{}{}
+}
+// Add newly returned objects
+for _, obj := range listObjectsResponse.Objects {
+	uniqueObjects[obj] = struct{}{}
+}
+// Convert map keys to slice
+allObjects := make([]string, 0, len(uniqueObjects))
+for obj := range uniqueObjects {

Comment on lines +779 to +788
if err == nil {
	// Compress token to reduce size (object IDs are highly compressible)
	compressed, err := compressToken(tokenBytes)
	if err == nil {
		encoded, err := q.encoder.Encode(compressed)
		if err == nil {
			listObjectsResponse.ContinuationToken = encoded
		}
	}
}
Copilot AI Nov 26, 2025

Errors during token generation are silently ignored here. This is inconsistent with other commands like list_stores.go:63-65 where encoding errors return serverErrors.HandleError("", err). Silent failure could mislead clients into thinking all results were returned when pagination actually failed. Consider handling errors consistently by returning them to the client or at minimum logging them using q.logger.Error().

Suggested change

-if err == nil {
-	// Compress token to reduce size (object IDs are highly compressible)
-	compressed, err := compressToken(tokenBytes)
-	if err == nil {
-		encoded, err := q.encoder.Encode(compressed)
-		if err == nil {
-			listObjectsResponse.ContinuationToken = encoded
-		}
-	}
-}
+if err != nil {
+	return nil, serverErrors.HandleError("", err)
+}
+// Compress token to reduce size (object IDs are highly compressible)
+compressed, err := compressToken(tokenBytes)
+if err != nil {
+	return nil, serverErrors.HandleError("", err)
+}
+encoded, err := q.encoder.Encode(compressed)
+if err != nil {
+	return nil, serverErrors.HandleError("", err)
+}
+listObjectsResponse.ContinuationToken = encoded

Comment on lines +584 to +607
// Decode continuation token if provided (for pagination support)
// TODO: Get continuation token from request once proto is updated
// For now, this prepares the server-side logic
var previouslyReturned map[string]struct{}
continuationToken := "" // Will be populated from req once proto is updated
if continuationToken != "" {
	decodedToken, err := q.encoder.Decode(continuationToken)
	if err != nil {
		return nil, serverErrors.ErrInvalidContinuationToken
	}
	// Decompress the token (tokens are gzip compressed to reduce size)
	decompressed, err := decompressToken(decodedToken)
	if err != nil {
		return nil, serverErrors.ErrInvalidContinuationToken
	}
	var token listObjectsToken
	if err := json.Unmarshal(decompressed, &token); err != nil {
		return nil, serverErrors.ErrInvalidContinuationToken
	}
	previouslyReturned = make(map[string]struct{}, len(token.Objects))
	for _, obj := range token.Objects {
		previouslyReturned[obj] = struct{}{}
	}
}
Copilot AI Nov 26, 2025

The continuation token decoding and deduplication logic (lines 584-607) lacks integration test coverage. While there are unit tests for token encoding/decoding, there are no tests that verify the full pagination flow: making a request with a continuation token, ensuring duplicates are skipped, and verifying that subsequent requests return the correct results. Consider adding integration tests that simulate multiple paginated requests with continuation tokens.

Comment on lines +1133 to +1262
// Tests for pagination support

func TestListObjectsTokenEncodeDecode(t *testing.T) {
	t.Run("token_roundtrip_with_compression", func(t *testing.T) {
		// Create a query with default encoder
		checkResolver := graph.NewLocalChecker()
		q, err := NewListObjectsQuery(memory.New(), checkResolver, fakeStoreID)
		require.NoError(t, err)
		require.NotNil(t, q)

		// Verify encoder is set
		require.NotNil(t, q.encoder)

		// Test encoding a token with compression
		token := listObjectsToken{Objects: []string{"folder:1", "folder:2", "folder:3"}}
		tokenBytes, err := json.Marshal(token)
		require.NoError(t, err)

		// Compress the token
		compressed, err := compressToken(tokenBytes)
		require.NoError(t, err)

		// Base64 encode
		encoded, err := q.encoder.Encode(compressed)
		require.NoError(t, err)
		require.NotEmpty(t, encoded)

		// Decode and decompress
		decoded, err := q.encoder.Decode(encoded)
		require.NoError(t, err)

		decompressed, err := decompressToken(decoded)
		require.NoError(t, err)

		var decodedToken listObjectsToken
		err = json.Unmarshal(decompressed, &decodedToken)
		require.NoError(t, err)

		require.Equal(t, token.Objects, decodedToken.Objects)
	})

	t.Run("compression_reduces_size", func(t *testing.T) {
		// Create a token with many similar object IDs (highly compressible)
		var objects []string
		for i := 0; i < 100; i++ {
			objects = append(objects, "document:"+strconv.Itoa(i))
		}
		token := listObjectsToken{Objects: objects}
		tokenBytes, err := json.Marshal(token)
		require.NoError(t, err)

		compressed, err := compressToken(tokenBytes)
		require.NoError(t, err)

		// Compressed should be significantly smaller
		compressionRatio := float64(len(compressed)) / float64(len(tokenBytes))
		t.Logf("Original size: %d bytes, Compressed: %d bytes, Ratio: %.2f",
			len(tokenBytes), len(compressed), compressionRatio)

		// Expect at least 50% compression for repetitive object IDs
		require.Less(t, compressionRatio, 0.7, "Expected better compression ratio")
	})

	t.Run("empty_token", func(t *testing.T) {
		checkResolver := graph.NewLocalChecker()
		q, err := NewListObjectsQuery(memory.New(), checkResolver, fakeStoreID)
		require.NoError(t, err)

		token := listObjectsToken{Objects: []string{}}
		tokenBytes, err := json.Marshal(token)
		require.NoError(t, err)

		compressed, err := compressToken(tokenBytes)
		require.NoError(t, err)

		encoded, err := q.encoder.Encode(compressed)
		require.NoError(t, err)
		require.NotEmpty(t, encoded)

		decoded, err := q.encoder.Decode(encoded)
		require.NoError(t, err)

		decompressed, err := decompressToken(decoded)
		require.NoError(t, err)

		var decodedToken listObjectsToken
		err = json.Unmarshal(decompressed, &decodedToken)
		require.NoError(t, err)
		require.Empty(t, decodedToken.Objects)
	})
}

func TestWithListObjectsEncoder(t *testing.T) {
	t.Run("custom_encoder_option", func(t *testing.T) {
		checkResolver := graph.NewLocalChecker()
		customEncoder := encoder.NewBase64Encoder()

		q, err := NewListObjectsQuery(
			memory.New(),
			checkResolver,
			fakeStoreID,
			WithListObjectsEncoder(customEncoder),
		)
		require.NoError(t, err)
		require.NotNil(t, q)
		require.Equal(t, customEncoder, q.encoder)
	})
}

func TestListObjectsResponseHasContinuationToken(t *testing.T) {
	t.Run("response_includes_continuation_token_field", func(t *testing.T) {
		// Verify the ListObjectsResponse struct has the ContinuationToken field
		response := ListObjectsResponse{
			Objects:           []string{"folder:1", "folder:2"},
			ContinuationToken: "test_token",
		}

		require.Equal(t, "test_token", response.ContinuationToken)
		require.Len(t, response.Objects, 2)
	})

	t.Run("empty_continuation_token_when_complete", func(t *testing.T) {
		response := ListObjectsResponse{
			Objects:           []string{"folder:1"},
			ContinuationToken: "", // Empty means all results returned
		}

		require.Empty(t, response.ContinuationToken)
	})
}
Copilot AI Nov 26, 2025

The tests in this section only verify token encoding/decoding mechanics and the struct field existence. There are no integration tests that verify the end-to-end pagination behavior, such as:

  1. Making a ListObjects request that hits the max results limit and returns a continuation token
  2. Using that token in a subsequent request and verifying no duplicates are returned
  3. Verifying that all expected objects are eventually returned across multiple paginated requests
  4. Testing pagination when the deadline is exceeded

Consider adding integration tests similar to those in list_stores_test.go that test the complete pagination flow.

Comment on lines +704 to +714
token := listObjectsToken{Objects: allObjects}
tokenBytes, err := json.Marshal(token)
if err == nil {
	// Compress token to reduce size (object IDs are highly compressible)
	compressed, err := compressToken(tokenBytes)
	if err == nil {
		encoded, err := q.encoder.Encode(compressed)
		if err == nil {
			res.ContinuationToken = encoded
		}
	}
Copilot AI Nov 26, 2025

This token generation logic is duplicated in lines 766-789. Consider extracting it into a helper method to improve maintainability and reduce code duplication. The method could be named generateContinuationToken and accept parameters like allReturnedObjects, previouslyReturned, and return the encoded token string and error.

Suggested change

-token := listObjectsToken{Objects: allObjects}
-tokenBytes, err := json.Marshal(token)
-if err == nil {
-	// Compress token to reduce size (object IDs are highly compressible)
-	compressed, err := compressToken(tokenBytes)
-	if err == nil {
-		encoded, err := q.encoder.Encode(compressed)
-		if err == nil {
-			res.ContinuationToken = encoded
-		}
-	}
+encoded, err := generateContinuationToken(q.encoder, allObjects)
+if err == nil {
+	res.ContinuationToken = encoded

Comment on lines +706 to +715
if err == nil {
	// Compress token to reduce size (object IDs are highly compressible)
	compressed, err := compressToken(tokenBytes)
	if err == nil {
		encoded, err := q.encoder.Encode(compressed)
		if err == nil {
			res.ContinuationToken = encoded
		}
	}
}
Copilot AI Nov 26, 2025

Errors during token generation are silently ignored here. This is inconsistent with other commands like list_stores.go:63-65 where encoding errors return serverErrors.HandleError("", err). Silent failure could mislead clients into thinking all results were returned when pagination actually failed. Consider handling errors consistently by returning them to the client or at minimum logging them using q.logger.Error().

Suggested change

-if err == nil {
-	// Compress token to reduce size (object IDs are highly compressible)
-	compressed, err := compressToken(tokenBytes)
-	if err == nil {
-		encoded, err := q.encoder.Encode(compressed)
-		if err == nil {
-			res.ContinuationToken = encoded
-		}
-	}
-}
+if err != nil {
+	return nil, serverErrors.HandleError("", err)
+}
+// Compress token to reduce size (object IDs are highly compressible)
+compressed, err := compressToken(tokenBytes)
+if err != nil {
+	return nil, serverErrors.HandleError("", err)
+}
+encoded, err := q.encoder.Encode(compressed)
+if err != nil {
+	return nil, serverErrors.HandleError("", err)
+}
+res.ContinuationToken = encoded

Comment on lines +585 to +586
// TODO: Get continuation token from request once proto is updated
// For now, this prepares the server-side logic
Copilot AI Nov 26, 2025

This TODO comment indicates that the continuation token functionality is partially implemented but not yet wired to the request. However, the PR description states this is blocked by an API change (openfga/api#238). Consider updating this comment to reference the blocking PR and provide a clearer explanation of what needs to be done once the proto is updated, such as: "TODO: Wire continuation token from req.GetContinuationToken() once openfga/api#238 is merged".

Suggested change

-// TODO: Get continuation token from request once proto is updated
-// For now, this prepares the server-side logic
+// TODO: Wire continuation token from req.GetContinuationToken() once https://github.com/openfga/api/pull/238 is merged.
+// For now, this prepares the server-side logic.

Comment on lines +1192 to +1193
// Expect at least 50% compression for repetitive object IDs
require.Less(t, compressionRatio, 0.7, "Expected better compression ratio")
Copilot AI Nov 26, 2025

[nitpick] The compression ratio test assertion (expecting < 0.7) could be fragile if gzip implementation details change or if the test data pattern changes. Consider making this test more lenient (e.g., < 0.8 or < 0.9) or documenting why 0.7 specifically is required. Alternatively, just verify that compression occurred (compressed size < original size) without asserting a specific ratio, since the goal is to ensure compression is working, not to guarantee a specific compression level.

Suggested change

-// Expect at least 50% compression for repetitive object IDs
-require.Less(t, compressionRatio, 0.7, "Expected better compression ratio")
+// Assert that compression actually reduced the size
+require.Less(t, len(compressed), len(tokenBytes), "Compression should reduce size")

// For now, this prepares the server-side logic
var previouslyReturned map[string]struct{}
continuationToken := "" // Will be populated from req once proto is updated
if continuationToken != "" {
Copilot AI Nov 26, 2025

This condition can never be true: continuationToken is assigned the empty string on the line above, so the branch is dead code until the token is wired in from the request proto.

Development

Successfully merging this pull request may close these issues:

Feature Request: Add pagination and continuation-token support to ListObjects API