The Glow compiler makes matrix math fast by optimizing for cache locality: https://gist.github.com/nadavrot/5b35d44e8ba3dd718e595e40184d03f0
We could use compiled Glow code as gumath kernels. Glow can compile ahead of time (AOT): https://github.com/pytorch/glow/blob/master/docs/AOT.md (this would be a lot like our story with numba).
To create a Glow network you either have to write C++ or compile from an ONNX model: https://github.com/pytorch/glow/blob/master/docs/Example.md https://github.com/pytorch/glow/blob/master/docs/IR.md#the-lifetime-of-a-glow-instruction
Could we have a high-level Python API that compiles to the ONNX spec? For example, a lazy array/numpy-like library that builds up an ONNX graph as you interact with Python objects, then compiles that graph with Glow and exposes a gumath kernel for the operation?
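A minimal sketch of the lazy-graph idea, in plain Python with no dependencies. Everything here is hypothetical: `LazyArray` and `to_graph` are invented names, and the emitted tuples only mimic ONNX-style nodes (`"Add"`, `"MatMul"`); a real implementation would emit actual ONNX protos (e.g. via `onnx.helper`) and hand them to Glow.

```python
# Hypothetical sketch: a lazy array that records operations into an
# ONNX-style node list instead of computing eagerly.

class LazyArray:
    _counter = 0

    def __init__(self, name=None, op=None, inputs=()):
        if name is None:
            LazyArray._counter += 1
            name = f"t{LazyArray._counter}"
        self.name = name
        self.op = op          # ONNX-style op type, e.g. "Add", "MatMul"
        self.inputs = inputs  # upstream LazyArray nodes

    def __add__(self, other):
        return LazyArray(op="Add", inputs=(self, other))

    def __matmul__(self, other):
        return LazyArray(op="MatMul", inputs=(self, other))

    def to_graph(self):
        """Walk the expression in topological order, emitting (op, input_names, output_name)."""
        nodes, seen = [], set()

        def visit(node):
            if node.name in seen:
                return
            seen.add(node.name)
            for inp in node.inputs:
                visit(inp)
            if node.op is not None:
                nodes.append((node.op, [i.name for i in node.inputs], node.name))

        visit(self)
        return nodes

x = LazyArray("x")
y = LazyArray("y")
z = (x @ y) + x
print(z.to_graph())  # [('MatMul', ['x', 'y'], 't1'), ('Add', ['t1', 'x'], 't2')]
```

The interesting design question is where the graph gets cut and compiled: eagerly per-expression, or deferred until the user forces a result (as numba does at first call).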
If XND/gumath is the interop layer, then it could be used to combine TVM/Glow/numba models. The underlying hypothesis is that both the memory formats and the computation can be expressed with xnd/gumath. I think the best way to answer this is to write code that attempts the interop and see where it breaks down.
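A toy sketch of what "xnd as the interop layer" would mean, in plain Python. The `Container` class, its xnd-style type string, and the two backend functions are all invented stand-ins (a real version would use actual xnd containers and kernels compiled by numba and Glow); the point is only that kernels from different compilers compose freely once they agree on one container format.

```python
# Hypothetical sketch: two "backend" kernels sharing one container format,
# standing in for numba-compiled and Glow-compiled gumath kernels.

from dataclasses import dataclass

@dataclass
class Container:
    type: str   # xnd-style type string, e.g. "3 * float64"
    data: list  # stand-in for the underlying buffer

def numba_style_double(c: Container) -> Container:
    # pretend this body was JIT-compiled by numba
    return Container(c.type, [v * 2 for v in c.data])

def glow_style_add_one(c: Container) -> Container:
    # pretend this body was AOT-compiled by Glow
    return Container(c.type, [v + 1 for v in c.data])

a = Container("3 * float64", [1.0, 2.0, 3.0])
# Because both kernels agree on the container, they chain without copies
# or conversions; that agreement is exactly what xnd/gumath would provide.
b = glow_style_add_one(numba_style_double(a))
print(b.data)  # [3.0, 5.0, 7.0]
```

The real experiment is whether each compiler's calling convention can actually be wrapped as a gumath kernel over xnd memory without copying; that is where I'd expect it to break down first.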