LibGfx/JBIG2+jbig2-from-json+Tests: Implement halftone "match_image" feature, add more halftone tests #26399
Conversation
When I renamed the halftone region's halftone keys, I forgot to update their spelling in some diagnostic messages. Update the diagnostics to use the correct spellings.
No behavior change.
This will allow putting a non-default-constructible field in HalftoneRegionSegmentData. No behavior change.
This makes it possible to give a halftone region a reference image instead of a grid of indices. If that's done, the grid of indices is computed by finding the best-matching pattern for each covered region in the reference image. This can be used with a pattern dictionary that uses unique_image_tiles from SerenityOS#26299 to make exact test images that have fewer tiles than distinct_image_tiles and identity_tile_indices. It should also be possible to use this with a regular halftone dot pattern dictionary to get actual halftone images, but I haven't tried that yet. The matching is done in the writer instead of jbig2-from-image because jbig2-from-image does not have access to referred-to segments, and because this will eventually have to learn to deal with interesting grid vectors, and that logic is also all in the writer. (For now, interesting grid vectors are not supported, though.)
This follows up on SerenityOS#26299 to create a test image that stores just 88 tiles instead of the 625 in bitmap-halftone-10bpp.json.
```cpp
for (u32 y = 0; y < halftone_region.grayscale_height; ++y) {
    for (u32 x = 0; x < halftone_region.grayscale_width; ++x) {
        // Find the pattern in the pattern dictionary that matches the reference best.
        // FIXME: This is a naive, inefficient implementation.
        u32 best_pattern_index = 0;
        u32 best_pattern_difference = UINT32_MAX;
        for (u32 pattern_index = 0; pattern_index <= pattern_dictionary.gray_max; ++pattern_index) {
            u32 pattern_x = pattern_index * pattern_dictionary.pattern_width;
            u32 pattern_difference = 0;
            for (u32 py = 0; py < pattern_dictionary.pattern_height; ++py) {
                for (u32 px = 0; px < pattern_dictionary.pattern_width; ++px) {
```
casual n^5 algo?
The halftone region is a tiling of the reference image, so the outer x/y walks the reference image in tile increments and the inner walks the tile data. So in a way this does O(number of patterns) work per pixel (to find the best pattern to match each tile). I'd call this O(n^3).
Still, as the comment says, yes, this is inefficient. (It's not super duper slow in practice though, just regular slow: I tried covering Tests/LibGfx/test-inputs/jpg/big_image.jpg, a 4000x3000 image, with 16 possible 4x4 tiles, and that took less than a second.)
A lossless_halftone_region segment got added in SerenityOS#26399.
Then use this to add a basic halftone test, and tests for templates 1-3 (in both pattern dictionary and halftone region).