Update java client version #428
Conversation
# Conflicts: # clickhouse-core/src/main/scala/com/clickhouse/spark/client/NodeClient.scala
Other comments (20)
- spark-3.5/clickhouse-spark/src/main/scala/com/clickhouse/spark/read/ClickHouseReader.scala (71-72) The compression codec parameter has been commented out in the class definition, but there's still a reference to it in the commented code on line 72. Either remove the commented code entirely or update the implementation to handle compression with the new API.
- clickhouse-core/src/main/scala/com/clickhouse/spark/client/NodeClient.scala (176-183) The syncInsert method creates an empty ByteArrayInputStream (`is`) to pass to the deserializer after a successful insert, but this stream contains no data. The deserializer will likely fail when trying to process this empty stream. You should use the actual response data from the InsertResponse instead.
- gradle.properties (16-16) The new Maven snapshots repository URL `https://central.sonatype.com/repository/maven-snapshots/` appears to be incorrect. This is a web UI URL, not a repository URL that can be used for dependency resolution. The correct URL format should be something like `https://s01.oss.sonatype.org/content/repositories/snapshots/` (the previous value) or `https://oss.sonatype.org/content/repositories/snapshots/`, e.g. `mavenSnapshotsRepo=https://s01.oss.sonatype.org/content/repositories/snapshots/`.
- spark-3.3/clickhouse-spark/src/main/scala/com/clickhouse/spark/read/ClickHouseReader.scala (44-44) The compression codec parameter is commented out on line 44, but it seems the code might still need this parameter. The call to `nodeClient.queryAndCheck()` on line 72 has a comment suggesting the codec parameter was removed, but this might cause issues if compression is still needed elsewhere in the code.
- spark-3.3/clickhouse-spark/src/main/scala/com/clickhouse/spark/read/ClickHouseReader.scala (74-74) The `totalBlocksRead` method now returns a hardcoded `0L` instead of getting the actual blocks read from the response. This will cause inaccurate metrics reporting. If the new API version doesn't support this metric, consider finding an alternative way to track blocks read or document this limitation.
- spark-3.3/clickhouse-spark/src/main/scala/com/clickhouse/spark/read/format/ClickHouseBinaryReader.scala (109-109) Converting a String to bytes without specifying an encoding could lead to platform-dependent results. Consider checking if the value is already a byte array first, or specify an explicit encoding:

  ```scala
  case BinaryType => value match {
    case bytes: Array[Byte] => bytes
    case str: String        => str.getBytes("UTF-8")
    case _                  => value.toString.getBytes("UTF-8")
  }
  ```

- build.gradle (248-250) The PR changes the main dependency from `clickhouse-jdbc` to `client-v2` in the clickhouse-core project, but `clickhouse-jdbc` is still being used in the clickhouse-core-it project. This could lead to version conflicts or unexpected behavior if both libraries have different APIs or behaviors. Consider updating the test dependency to use the same client library as the main code.
- spark-3.5/clickhouse-spark/src/main/scala/com/clickhouse/spark/read/format/ClickHouseBinaryReader.scala (109-109) Casting a String to bytes for BinaryType might cause encoding issues if the value is already binary data. Consider checking if the value is already a byte array first.
- gradle.properties (26-27) You're changing from a stable release version (`0.6.3`) to a SNAPSHOT version (`0.9.2-SNAPSHOT`) of the ClickHouse JDBC driver. Using SNAPSHOT dependencies in production code can lead to build instability and unpredictable behavior, as these versions can change without notice. Is there a specific reason to use a SNAPSHOT instead of a stable release? If you need features from 0.9.x, consider waiting for a stable release or at least documenting why a SNAPSHOT is necessary.
- clickhouse-core/src/main/scala/com/clickhouse/spark/client/NodeClient.scala (175-179) There's a redundant creation of ByteArrayInputStream. You read all bytes into `payload` and then create a new ByteArrayInputStream with the same payload. You can simplify this by using the payload array directly in a single ByteArrayInputStream.
- spark-3.4/clickhouse-spark/src/main/scala/com/clickhouse/spark/write/ClickHouseWriter.scala (254-254) The comment `// , codec` doesn't provide any useful information. If this parameter was removed due to an API change in the Java client, it would be better to add a more descriptive comment explaining why the parameter was removed.
- spark-3.4/clickhouse-spark/src/main/scala/com/clickhouse/spark/read/ClickHouseReader.scala (44-44) The compression codec parameter has been removed from the queryAndCheck method call, but the codec field is just commented out rather than properly removed. If compression is no longer needed with the updated Java client, consider removing the field entirely rather than commenting it out.
- clickhouse-core/src/main/scala/com/clickhouse/spark/client/NodeClient.scala (161-165) The `readAllBytes` implementation is inefficient, as it reads one byte at a time and builds an array using Stream. Consider using Java's built-in `InputStream.readAllBytes()` method (available since Java 9) or a more efficient buffered reading approach (see the sketch after this list).
- spark-3.4/clickhouse-spark/src/main/scala/com/clickhouse/spark/read/format/ClickHouseBinaryReader.scala (109-109) Casting a String directly to bytes for BinaryType might cause encoding issues if the value is already binary data. Consider checking if the value is already a byte array first.
- spark-3.5/clickhouse-spark/src/main/scala/com/clickhouse/spark/write/ClickHouseWriter.scala (256-256) The commented out `codec` parameter should be removed since it's no longer needed with the updated Java client version.
- spark-3.5/clickhouse-spark/src/main/scala/com/clickhouse/spark/read/format/ClickHouseBinaryReader.scala (89-89) This commented out code can be removed as it's not needed - the proper check is implemented on the next line.
- spark-3.3/clickhouse-spark/src/main/scala/com/clickhouse/spark/read/ClickHouseReader.scala (76-76) The method to get bytes read has changed from `resp.getSummary.getReadBytes` to `resp.getReadBytes()`, but there's still a commented reference to the old method. Consider removing the comment to avoid confusion for future developers.
- spark-3.3/clickhouse-spark/src/main/scala/com/clickhouse/spark/read/format/ClickHouseBinaryReader.scala (89-89) This commented out code should be removed as it's redundant with the active implementation on the next line that uses `isInstanceOf`.
- spark-3.4/clickhouse-spark/src/main/scala/com/clickhouse/spark/read/format/ClickHouseBinaryReader.scala (89-89) This commented-out line should be removed; it is not needed, since the implementation already uses `isInstanceOf` on the next line.

  ```scala
  case IntegerType if value.isInstanceOf[java.lang.Long] =>
  ```

- spark-3.4/clickhouse-spark/src/main/scala/com/clickhouse/spark/read/ClickHouseReader.scala (71-71) There's an inline comment `// , codec` on line 71 that appears to be a reminder of a removed parameter. Consider removing this comment or replacing it with a more descriptive explanation of why the parameter was removed.
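Regarding the two NodeClient comments above (the byte-at-a-time `readAllBytes` and the redundant `ByteArrayInputStream`), a minimal sketch of the suggested simplification follows. Names such as `NodeClientStreamSketch` and `toPayloadAndStream` are illustrative only, not part of the connector or client API:

```scala
import java.io.{ByteArrayInputStream, InputStream}

object NodeClientStreamSketch {
  // Delegate bulk reading to the JDK (Java 9+) instead of reading one byte at a time.
  def readAllBytes(is: InputStream): Array[Byte] = is.readAllBytes()

  // Read a response body once, then reuse the same array for both the payload and
  // the stream handed to the deserializer, avoiding a second copy.
  def toPayloadAndStream(body: InputStream): (Array[Byte], ByteArrayInputStream) = {
    val payload = readAllBytes(body)
    (payload, new ByteArrayInputStream(payload))
  }
}
```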
💡 To request another review, post a new comment with "/windsurf-review".
```scala
// , codec
client.syncInsertOutputJSONEachRow(database, table, format, new ByteArrayInputStream(data)) match {
```
There's a commented-out parameter and explanatory comment that should be removed before merging. If the codec parameter is no longer needed in the method call due to API changes, we should just remove it cleanly without leaving commented code.
```diff
  lazy val resp: QueryResponse = nodeClient.queryAndCheck(scanQuery, format)

- def totalBlocksRead: Long = resp.getSummary.getStatistics.getBlocks
+ def totalBlocksRead: Long = 0L // resp.getSummary.getStatistics.getBlocks
```
The totalBlocksRead method now returns a hardcoded 0L instead of getting the actual value from the response. This will cause the BLOCKS_READ metric to always report 0, which could be misleading for monitoring and debugging purposes.
```diff
  lazy val resp: QueryResponse = nodeClient.queryAndCheck(scanQuery, format)

- def totalBlocksRead: Long = resp.getSummary.getStatistics.getBlocks
+ def totalBlocksRead: Long = 0L // resp.getSummary.getStatistics.getBlocks
```
The totalBlocksRead method now returns a hardcoded 0L instead of getting the actual block count from the response. This will cause inaccurate metrics reporting. Consider updating this method to retrieve the block count from the new QueryResponse API.
* Wake up ClickHouse Cloud instance before tests (#429)
* fix: Handle FixedString as plain text in JSON reader for all Spark versions

  Problem: ClickHouse returns FixedString as plain text in JSON format, but the connector was trying to decode it as Base64, causing InvalidFormatException.

  Solution: Use pattern matching with guard to check if the JSON node is textual.
  - If textual (FixedString): decode as UTF-8 bytes
  - If not textual (true binary): decode as Base64

  Applied to Spark 3.3, 3.4, and 3.5.

---------

Co-authored-by: Bentsi Leviav <[email protected]>
Co-authored-by: Shimon Steinitz <[email protected]>
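A hedged sketch of the approach that commit describes, assuming Jackson `JsonNode` handling; this is illustrative, not the connector's actual reader code:

```scala
import java.nio.charset.StandardCharsets
import com.fasterxml.jackson.databind.JsonNode

// Mirrors the commit description: if the node is textual, treat it as FixedString
// returned as plain text; otherwise treat it as true binary data.
def decodeBinary(node: JsonNode): Array[Byte] = node match {
  case n if n.isTextual => n.asText.getBytes(StandardCharsets.UTF_8) // FixedString: plain text -> UTF-8 bytes
  case n                => n.binaryValue()                           // true binary: Jackson returns the decoded bytes
}
```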
* Wake up ClickHouse Cloud instance before tests (#429)
* feat: Add comprehensive read test coverage for Spark 3.3, 3.4, and 3.5

  Add shared test trait ClickHouseReaderTestBase with 48 test scenarios covering:
  - All primitive types (Boolean, Byte, Short, Int, Long, Float, Double)
  - Large integers (UInt64, Int128, UInt128, Int256, UInt256)
  - Decimals (Decimal32, Decimal64, Decimal128)
  - Date/Time types (Date, Date32, DateTime, DateTime32, DateTime64)
  - String types (String, UUID, FixedString)
  - Enums (Enum8, Enum16)
  - IP addresses (IPv4, IPv6)
  - JSON data
  - Collections (Arrays, Maps)
  - Edge cases (empty strings, long strings, empty arrays, nullable variants)

  Test suites for Binary and JSON read formats.

  Test results: 96 tests per Spark version (288 total)
  - Binary format: 47/48 passing
  - JSON format: 47/48 passing
  - Overall: 94/96 passing per version (98% pass rate)

  Remaining failures are known bugs with fixes on separate branches.

* feat: Add comprehensive write test coverage for Spark 3.3, 3.4, and 3.5

  Add shared test trait ClickHouseWriterTestBase with 17 test scenarios covering:
  - Primitive types (Boolean, Byte, Short, Int, Long, Float, Double)
  - Decimal types
  - String types (regular and empty strings)
  - Date and Timestamp types
  - Collections (Arrays and Maps, including empty variants)
  - Nullable variants

  Test suites for JSON and Arrow write formats. Note: Binary write format is not supported (only JSON and Arrow).

  Test results: 34 tests per Spark version (102 total)
  - JSON format: 17/17 passing (100%)
  - Arrow format: 17/17 passing (100%)
  - Overall: 34/34 passing per version (100% pass rate)

  Known behavior: Boolean values write as BooleanType but read back as ShortType (0/1) due to ClickHouse storing Boolean as UInt8.

* style: Apply spotless formatting
* style: Apply spotless formatting for Spark 3.3 and 3.4

  Remove trailing whitespace from test files to pass CI spotless checks.

* fix: Change write format from binary to arrow in BinaryReaderSuite

  The 'binary' write format option doesn't exist. Changed to 'arrow', which is a valid write format option. Applied to Spark 3.3, 3.4, and 3.5.

* test: Add nullable tests for ShortType, IntegerType, and LongType

  Added missing nullable variant tests to ensure comprehensive coverage:
  - decode ShortType - nullable with null values (Nullable(Int16))
  - decode IntegerType - nullable with null values (Nullable(Int32))
  - decode LongType - nullable with null values (Nullable(Int64))

  These tests verify that nullable primitive types correctly handle NULL values in both Binary and JSON read formats. Applied to Spark 3.3, 3.4, and 3.5.

  Total tests per Spark version: 51 (was 48)
  Total across all versions: 153 (was 144)

* Refactor ClickHouseReaderTestBase: Add nullable tests and organize alphabetically
  - Add missing nullable test cases for: Date32, Decimal32, Decimal128, UInt16, UUID, DateTime64
  - Organize all 69 tests alphabetically by data type for better maintainability
  - Ensure comprehensive coverage with both nullable and non-nullable variants for all data types
  - Apply changes consistently across Spark 3.3, 3.4, and 3.5

* ci: Skip cloud tests on forks where secrets are unavailable

  Add repository check to cloud workflow to prevent failures on forks that don't have access to ClickHouse Cloud secrets. Tests will still run on the main repository where secrets are properly configured.
* Refactor and enhance Reader/Writer tests for all Spark versions
  - Add BooleanType tests to Reader (2 tests) with format-aware assertions
  - Add 6 new tests to Writer: nested arrays, arrays with nullable elements, multiple Decimal precisions (18,4 and 38,10), Map with nullable values, and StructType
  - Reorder all tests lexicographically for better organization
  - Writer tests increased from 17 to 33 tests
  - Reader tests increased from 69 to 71 tests
  - Remove section header comments for cleaner code
  - Apply changes to all Spark versions: 3.3, 3.4, and 3.5
  - All tests now properly sorted alphabetically by data type and variant
* style: Apply spotless formatting to Reader/Writer tests

---------

Co-authored-by: Bentsi Leviav <[email protected]>
Co-authored-by: Shimon Steinitz <[email protected]>
- Fix DecimalType: Handle both BigInteger (Int256/UInt256) and BigDecimal (Decimal types)
- Fix ArrayType: Direct call to BinaryStreamReader.ArrayValue.getArrayOfObjects()
- Fix StringType: Handle UUID, InetAddress, and EnumValue types
- Fix DateType: Handle both LocalDate and ZonedDateTime
- Fix MapType: Handle all util.Map implementations

Removed reflection and defensive pattern matching for better performance. All 34 Binary Reader test failures are now fixed (71/71 tests passing). Fixes compatibility with the new Java client API in the update-java-client-version branch.
- Add Decimal(18,4) test with 0.001 tolerance for JSON/Arrow formats
- Documents precision limitation for decimals with >15-17 significant digits
- Uses tolerance-based assertions to account for observed precision loss
- Binary format preserves full precision (already tested in Binary Reader suite)
- All 278 tests passing
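A brief, hypothetical illustration of the tolerance-based assertion mentioned above (ScalaTest `Matchers` syntax assumed; the values are made up):

```scala
import org.scalatest.matchers.should.Matchers._

// Decimal(18,4) round-tripped through the JSON/Arrow formats may lose precision
// beyond ~15-17 significant digits, so compare with the 0.001 tolerance.
val expected = BigDecimal("12345678901234.5678")
val actual   = BigDecimal("12345678901234.5680") // value as read back (illustrative)
actual.toDouble shouldBe expected.toDouble +- 0.001
```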
Shimon Steinitz seems not to be a GitHub user. You need a GitHub account to be able to sign the CLA. If you already have a GitHub account, please add the email address used for this commit to your account. You have signed the CLA already but the status is still pending? Let us recheck it.
Force-pushed from 7ea939c to 8beb2bc.
- Convert mutable.ArraySeq to Array in ClickHouseJsonReader to ensure immutable collections
- Add test workaround for Spark's Row.getSeq behavior in Scala 2.13
- Fix Spotless formatting: remove trailing whitespace in ClickHouseBinaryReader
- Applied to all Spark versions: 3.3, 3.4, 3.5
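A small, hypothetical illustration of the first item above (materializing a `mutable.ArraySeq` into a plain `Array` before handing values to Spark); the variable names are made up:

```scala
import scala.collection.mutable

// Scala 2.13 array decoding can surface a mutable.ArraySeq wrapper; copying it
// into a plain Array gives Spark a stable, detached collection.
val decoded: mutable.ArraySeq[Any] = mutable.ArraySeq[Any](1, 2, 3)
val values: Array[Any] = decoded.toArray
```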
Force-pushed from 982511d to 2687b1f.
Force-pushed from 98365d4 to 18b4fcb.