Image filter (Node)
In this example we're going to build a server-side app that applies a filter to an image. It'll be a real server this time, one that accepts requests from the browser and sends different images based on parameters in the URL.
First, we initialize the project and add in the necessary modules:
mkdir filter
cd filter
npm init -y
npm install fastify sharp node-zigar
mkdir src zig img
We'll be using Fastify, a modern alternative to Express.js, and Sharp, a popular image processing library.
After creating the basic skeleton, add index.js:
import Fastify from 'fastify';
import Sharp from 'sharp';
import { fileURLToPath } from 'url';
const fastify = Fastify();
fastify.get('/', (req, reply) => {
const name = 'sample';
const filter = 'sepia';
const tags = [
{ width: 150, height: 100, intensity: 0.0 },
{ width: 150, height: 100, intensity: 0.3 },
{ width: 300, height: 300, intensity: 0.2 },
{ width: 300, height: 300, intensity: 0.4 },
{ width: 400, height: 400, intensity: 0.3 },
{ width: 500, height: 200, intensity: 0.5 },
].map((params) => {
const json = JSON.stringify(params);
const base64 = Buffer.from(json).toString('base64');
const url = `img/${name}/${filter}/${base64}`;
return `<p><img src="${url}"></p>`;
});
reply.type('text/html');
return `
<!doctype html>
<html lang="en">
<head>
<meta charset="UTF-8" />
<title>Image filter test</title>
</head>
<body>${tags.join('')}</body>
</html>`;
});
fastify.get('/img/:name/:filter/:base64', async (req, reply) => {
const { name, filter, base64 } = req.params;
const json = Buffer.from(base64, 'base64');
const params = JSON.parse(json);
const url = new URL(`../img/${name}.png`, import.meta.url);
const path = fileURLToPath(url);
const { width, height, ...filterParams } = params;
// open image, resize it, and get raw data
const inputImage = Sharp(path).ensureAlpha().resize(width, height);
const { data, info } = await inputImage.raw().toBuffer({ resolveWithObject: true });
// place raw data into new image and output it as JPEG
const outputImage = Sharp(data, { raw: info, });
reply.type('image/jpeg');
return outputImage.jpeg().toBuffer();
});
const address = await fastify.listen({ port: 3000 });
console.log(`Listening at ${address}`);
The root route / maps to an HTML page with a number of <img> tags referencing images at
different settings. The handler of /img/:name/:filter/:base64 generates these images. It
decompresses the source image, resizes it, and then obtains the raw pixel data. It then immediately
saves the data as a JPEG image. We'll add the filtering step after we've verified that the basic
code works.
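To see concretely what the route handler produces, here is a sketch of how one image URL on the page is formed and decoded, using the first row of settings from the table above:

```javascript
// Encode one set of parameters into a URL segment, the way the root handler does
const params = { width: 150, height: 100, intensity: 0.0 };
const base64 = Buffer.from(JSON.stringify(params)).toString('base64');
const url = `img/sample/sepia/${base64}`;
console.log(url);
// The image handler decodes the last segment back into the original object
const decoded = JSON.parse(Buffer.from(base64, 'base64').toString());
console.log(decoded.width, decoded.intensity);
```

Note that standard base64 may contain `/` and `+`; the parameter objects used here happen to encode cleanly, but `base64url` would be the safer encoding for URL segments.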
To get our app to run, add the following to package.json:
"type": "module",
"scripts": {
"start": "node --loader=node-zigar --no-warnings src/index.js"
},
Finally, download the following image into img as sample.png (or choose an image of your own):

We are ready to start the server:
npm run start
When you open the link, you should see the following:

Okay, now it's time to implement the image filtering functionality. Download
sepia.zig
into the zig directory.
The code in question was translated from a Pixel Bender filter using pb2zig. Consult the intro page for an explanation of how it works.
In index.js, insert the following lines into the image route handler, right after the call to
inputImage.raw().toBuffer():
// push data through filter
const { createOutput } = await import(`../zig/${filter}.zig`);
const input = {
src: {
data,
width: info.width,
height: info.height,
}
};
const output = createOutput(info.width, info.height, input, filterParams);
createOutput() has the following declaration:
pub fn createOutput(
allocator: std.mem.Allocator,
width: u32,
height: u32,
input: Input,
params: Parameters,
) !Output
allocator is automatically provided by Zigar. width and height come from the object returned
by Sharp. filterParams is what remains after width and height have been taken out from the
params object, i.e. { intensity: [number] }.
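The rest-pattern destructuring used in the handler makes this split explicit: width and height are consumed by the resize step, and whatever remains is handed to the filter.

```javascript
// What the rest pattern leaves behind after width and height are taken out
const params = { width: 150, height: 100, intensity: 0.3 };
const { width, height, ...filterParams } = params;
console.log(filterParams); // { intensity: 0.3 }
```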
Input is a parameterized type:
pub const Input = KernelInput(u8, kernel);
Which expands to:
pub const Input = struct {
src: Image(u8, 4, false),
};
Then further to:
pub const Input = struct {
src: struct {
pub const Pixel = @Vector(4, u8);
pub const FPixel = @Vector(4, f32);
pub const channels = 4;
data: []const Pixel,
width: u32,
height: u32,
colorSpace: ColorSpace = .srgb,
offset: usize = 0,
};
};
So input.src.data is a slice pointer to four-wide u8 vectors, with each vector representing the
RGBA values of a pixel. Zigar can automatically cast the Buffer we received from Sharp into the
target type. That's why the initializer for the argument input is simply:
const input = {
src: {
data,
width: info.width,
height: info.height,
}
};
Like Input, Output is a parameterized type. It too can potentially contain multiple images. In
this case (and most cases), there's only one:
pub const Output = struct {
dst: struct {
pub const Pixel = @Vector(4, u8);
pub const FPixel = @Vector(4, f32);
pub const channels = 4;
data: []Pixel,
width: u32,
height: u32,
colorSpace: ColorSpace = .srgb,
offset: usize = 0,
},
};
dst.data points to memory allocated from allocator. Normally, the field would be represented by
a Zigar object on the JavaScript side. In sepia.zig however,
there's a meta-type declaration that changes this:
pub const @"meta(zigar)" = struct {
pub fn isFieldClampedArray(comptime T: type, comptime name: std.meta.FieldEnum(T)) bool {
if (@hasDecl(T, "Pixel")) {
// make field `data` clamped array if output pixel type is u8
if (@typeInfo(T.Pixel).vector.child == u8) {
return name == .data;
}
}
return false;
}
pub fn isFieldTypedArray(comptime T: type, comptime name: std.meta.FieldEnum(T)) bool {
if (@hasDecl(T, "Pixel")) {
// make field `data` typed array (if pixel value is not u8)
return name == .data;
}
return false;
}
pub fn isDeclPlain(comptime T: type, comptime _: std.meta.DeclEnum(T)) bool {
// make return value plain objects
return true;
}
};
During the export process, when Zigar encounters a field consisting of u8's, it'll call
isFieldClampedArray() to see if you want it to be an
Uint8ClampedArray.
If the function does not exist or returns false, it'll try isFieldTypedArray() next, which
would make the field a regular
Uint8Array.
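The practical difference between the two array types can be seen in plain JavaScript: clamped arrays saturate out-of-range values, while regular typed arrays wrap them modulo 256.

```javascript
// Out-of-range assignment: saturation vs. wrap-around
const clamped = new Uint8ClampedArray(1);
clamped[0] = 300;                    // saturates to 255
const wrapped = new Uint8Array(1);
wrapped[0] = 300;                    // wraps to 300 - 256 = 44
console.log(clamped[0], wrapped[0]); // 255 44
```

Saturation is the right behavior for pixel data, which is why the meta-function requests a clamped array when the pixel type is u8.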
Meanwhile, isDeclPlain() makes the output of all export functions plain JavaScript objects. The
end result is that we get something like this from createOutput():
{
dst: {
data: Uint8ClampedArray(60000) [ ... ],
width: 150,
height: 100,
colorSpace: 'srgb'
}
}
Which we can then use to create a new image:
// place raw data into new image and output it as JPEG
const outputImage = Sharp(output.dst.data, { raw: info });
Without this metadata we would need to do this:
// place raw data into new image and output it as JPEG
const outputImage = Sharp(output.dst.data.clampedArray, { raw: info });
Restart the server after making the needed changes. You should now see the following in the browser:

The function createOutput() is synchronous. When it's processing an image, it blocks Node's
event loop. This is generally undesirable. What we want to do instead is process the image in a
separate thread.
We'll first replace our import statement:
const { createOutput } = await import(`../zig/${filter}.zig`);
with the following:
const {
createOutputAsync,
startThreadPool,
stopThreadPoolAsync
} = await import(`../zig/${filter}.zig`);
if (!deinitThreadPool) {
startThreadPool(availableParallelism());
deinitThreadPool = stopThreadPoolAsync;
}
In the handler itself, the call to createOutput() becomes await createOutputAsync(), with the same four arguments. Outside the handler and before the call to listen(), we add the deinitialization code:
let deinitThreadPool;
fastify.addHook('onClose', () => deinitThreadPool?.());
const address = await fastify.listen({ port: 3000 });
And we need to import availableParallelism():
import { availableParallelism } from 'os';
We use this function to determine how many threads to create.
Now, let us open sepia.zig and examine what startThreadPool() actually does:
pub fn startThreadPool(count: u32) !void {
try work_queue.init(.{
.allocator = internal_allocator,
.stack_size = 65536,
.n_jobs = count,
});
}
work_queue is a struct containing a thread pool and a non-blocking queue. It has the following
declaration:
var work_queue: WorkQueue(thread_ns) = .{};
The queue stores requests for function invocation and runs them in separate threads. thread_ns
contains public functions that can be used. For this example we only have one:
const thread_ns = struct {
pub fn processSlice(signal: AbortSignal, width: u32, start: u32, count: u32, input: Input, output: Output, params: Parameters) !Output {
var instance = kernel.create(input, output, params);
if (@hasDecl(@TypeOf(instance), "evaluateDependents")) {
instance.evaluateDependents();
}
const end = start + count;
instance.outputCoord[1] = start;
while (instance.outputCoord[1] < end) : (instance.outputCoord[1] += 1) {
instance.outputCoord[0] = 0;
while (instance.outputCoord[0] < width) : (instance.outputCoord[0] += 1) {
instance.evaluatePixel();
if (signal.on()) return error.Aborted;
}
}
return output;
}
};
The logic is pretty straightforward. We initialize an instance of the kernel, then loop
through all coordinate pairs, running evaluatePixel() for each of them. After each pixel
we check the abort signal to see if termination has been requested.
createOutputAsync() pushes multiple processSlice() call requests into the work queue to
process an image in parallel. Let us first look at its arguments:
pub fn createOutputAsync(allocator: Allocator, promise: Promise, signal: AbortSignal, width: u32, height: u32, input: Input, params: Parameters) !void {
Allocator, Promise, and
AbortSignal are special parameters that Zigar provides
automatically. On the JavaScript side, the function has only four required arguments. It will also
accept a fifth argument: options, which may contain an alternate allocator, a callback function,
and an abort signal.
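Based on that description, a call from JavaScript might look like the following sketch. The createOutputAsync here is a hypothetical stub standing in for the imported Zig function, since only the call shape matters; the real function delegates the work to the Zig thread pool.

```javascript
// Hypothetical stub mirroring the JavaScript-side signature: four
// required arguments plus an optional options object (which may carry
// an allocator, a callback, or an abort signal)
async function createOutputAsync(width, height, input, params, options = {}) {
  if (options.signal?.aborted) throw new Error('Aborted');
  // allocate a correctly sized RGBA buffer to mimic the output shape
  return { dst: { data: new Uint8ClampedArray(width * height * 4), width, height } };
}

const controller = new AbortController();
createOutputAsync(4, 2, { src: {} }, { intensity: 0.3 }, { signal: controller.signal })
  .then((output) => console.log(output.dst.data.length)); // 32
```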
The function starts out by allocating memory for the output struct:
var output: Output = undefined;
// allocate memory for output image
const fields = std.meta.fields(Output);
var allocated: usize = 0;
errdefer inline for (fields, 0..) |field, i| {
if (i < allocated) {
allocator.free(@field(output, field.name).data);
}
};
inline for (fields) |field| {
const ImageT = @TypeOf(@field(output, field.name));
const data = try allocator.alloc(ImageT.Pixel, width * height);
@field(output, field.name) = .{
.data = data,
.width = width,
.height = height,
};
allocated += 1;
}
Then it divides the image into multiple slices. It divides the given Promise struct as well:
// add work units to queue
const workers: u32 = @intCast(@max(1, work_queue.thread_count));
const scanlines: u32 = height / workers;
const slices: u32 = if (scanlines > 0) workers else 1;
const multipart_promise = try promise.partition(internal_allocator, slices);
partition() creates a new promise
that fulfills the original promise when its resolve() method has been called a certain number of
times. It is used as the output argument for work_queue.push():
var slice_num: u32 = 0;
while (slice_num < slices) : (slice_num += 1) {
const start = scanlines * slice_num;
const count = if (slice_num < slices - 1) scanlines else height - (scanlines * slice_num);
try work_queue.push(thread_ns.processSlice, .{ signal, width, start, count, input, output, params }, multipart_promise);
}
}
The first argument to push() is the function to be invoked. The second is a tuple containing
arguments. The third is the output argument. The return value of processSlice(), either the
Output struct or error.Aborted, will be fed to this promise's resolve() method. When the
last slice has been processed, the promise on the JavaScript side becomes fulfilled.
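The slice arithmetic and the counting behavior of the partitioned promise can be sketched in plain JavaScript (the names here are illustrative, not part of Zigar's API):

```javascript
// Counting resolver standing in for promise.partition(): the outer
// promise settles only after resolve() has been called `count` times
function partitionResolve(resolve, count) {
  let remaining = count;
  return (value) => { if (--remaining === 0) resolve(value); };
}

// Same slice arithmetic as in createOutputAsync(): equal bands of
// scanlines, with the last slice absorbing the remainder rows
const height = 10, workers = 4;
const scanlines = Math.floor(height / workers);  // 2 rows per slice
const slices = scanlines > 0 ? workers : 1;      // 4 slices

const done = new Promise((resolve) => {
  const resolveSlice = partitionResolve(resolve, slices);
  for (let i = 0; i < slices; i++) {
    const start = scanlines * i;
    const count = i < slices - 1 ? scanlines : height - scanlines * i;
    // a worker thread would process rows [start, start + count) here
    setImmediate(() => resolveSlice({ start, count }));
  }
});
done.then(() => console.log('all slices processed'));
```

With a height of 10 and 4 workers, the first three slices cover 2 rows each and the last covers the remaining 4.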
Let us look at one last function: stopThreadPoolAsync:
pub fn stopThreadPoolAsync(promise: zigar.function.Promise(void)) void {
work_queue.deinitAsync(promise);
}
Shutdown of the work queue can only happen asynchronously, since blocking the main thread can lead to a deadlock.
Follow the same steps as described in the hello world example. First change the import statement:
const {
createOutputAsync,
startThreadPool,
stopThreadPoolAsync,
} = await import(`../lib/${filter}.zigar`);
Then create node-zigar.config.json:
{
"optimize": "ReleaseSmall",
"modules": {
"lib/sepia.zigar": {
"source": "zig/sepia.zig"
}
},
"targets": [
{ "platform": "linux", "arch": "x64" },
{ "platform": "linux", "arch": "arm64" },
{ "platform": "linux-musl", "arch": "x64" },
{ "platform": "linux-musl", "arch": "arm64" }
]
}
And build the libraries:
npx node-zigar build
If you have Docker installed, run the following command to test the server in a cloud environment:
docker run --rm -v ./:/test -w /test -p 3000:3000 node:alpine npm run start
Zigar 0.14.1 introduced a way of generating a standalone module loader. This frees an app from dependency on node-zigar, allowing it to run on other JavaScript runtimes such as Deno and Bun.
In node-zigar.config.json, add the "loader" field to lib/sepia.zigar:
{
"optimize": "ReleaseSmall",
"modules": {
"lib/sepia.zigar": {
"source": "zig/sepia.zig",
"loader": "src/sepia.js"
}
}
}
Rebuild the libraries:
npx node-zigar build
Afterward, sepia.js will appear in src.
In index.js, change the import statement to:
const {
createOutputAsync,
startThreadPool,
stopThreadPoolAsync,
} = await import(`./${filter}.js`);
In package.json, remove the --loader=node-zigar --no-warnings flags from the start command:
"scripts": {
"start": "node src/index.js",And move node-zigar from dependencies to devDependencies:
"devDependencies": {
"node-zigar": "^0.14.2"
},
The standalone loader does not rebuild the module automatically upon changes to the code. You have to do it manually.
You can find the complete source code for this example here.
Finally, we have an actual server-side app. And it does something cool! A major advantage of using Zig for a task like image processing is that the same code can be deployed on the browser too. Consult the Vite or Webpack version of this example to learn how to do it.
The image filter employed for this example is very rudimentary. Check out pb2zig's project page to see more advanced code.