
Thoughts/discussion: Interop with glow compiler #20

@saulshanabrook

Description


The Glow compiler makes matrix math fast by optimizing for cache locality: https://gist.github.com/nadavrot/5b35d44e8ba3dd718e595e40184d03f0

We could use compiled Glow code as gumath kernels. Glow can compile ahead of time (AOT): https://github.com/pytorch/glow/blob/master/docs/AOT.md (this would be a lot like our story with Numba).

To create a Glow network you either have to write C++ or compile from an ONNX model: https://github.com/pytorch/glow/blob/master/docs/Example.md https://github.com/pytorch/glow/blob/master/docs/IR.md#the-lifetime-of-a-glow-instruction

Could we have high-level Python APIs that compile to the ONNX spec? Like a lazy array/NumPy-like library that builds up an ONNX graph as you interact with Python objects? Then compile that graph with Glow and expose a gumath kernel for the operation?
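To make the lazy-array idea concrete, here is a minimal sketch (hypothetical API, not any real library) of an array object that records operations into a graph instead of computing them. A real implementation would emit ONNX nodes via something like `onnx.helper.make_node`; here the "graph" is just a plain list of `(op, inputs, output)` records, standing in for ONNX ops of the same names.

```python
import itertools

_counter = itertools.count()  # fresh names for intermediate tensors

class LazyArray:
    """Records ops into a graph instead of computing them (sketch)."""

    def __init__(self, name, graph=None):
        self.name = name                     # name of the tensor this node produces
        self.graph = graph if graph is not None else []

    @classmethod
    def input(cls, name):
        return cls(name)                     # graph input, no producing op

    def _apply(self, op, other):
        out = f"t{next(_counter)}"
        # concatenate both operands' histories, then append this op
        graph = self.graph + other.graph + [(op, (self.name, other.name), out)]
        return LazyArray(out, graph)

    def __add__(self, other):
        return self._apply("Add", other)     # would become an ONNX Add node

    def __matmul__(self, other):
        return self._apply("MatMul", other)  # would become an ONNX MatMul node

a = LazyArray.input("a")
b = LazyArray.input("b")
c = (a @ b) + a
# c.graph now records a MatMul followed by an Add; exporting that list
# as an ONNX GraphProto, then compiling it with Glow and wrapping the
# result as a gumath kernel, is the part this issue is asking about.
```

The design choice here is the usual lazy-evaluation trick: operator overloading defers work into a graph, so the full expression is visible to the compiler before anything runs.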

If XND/gumath is the interop layer, then it could be used to combine TVM/Glow/Numba models. The underlying hypothesis is that the memory formats and computation could be expressed using xnd/gumath. I think the best way to answer this is to write code that attempts the interop and see where we get stuck.
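The hypothesis above can be sketched in a few lines. Everything here is a stand-in: `Container` plays the role of an xnd container, and the two functions pretend to be kernels produced by different toolchains (one JIT-compiled by Numba, one AOT-compiled by Glow). The point is only structural: if both backends consume and produce the same typed memory description, their kernels compose freely.

```python
from dataclasses import dataclass

@dataclass
class Container:      # stand-in for an xnd-style typed container
    dtype: str        # e.g. "float64"
    shape: tuple      # e.g. (4,)
    data: list        # flat buffer (a real container would hold raw memory)

def numba_style_double(x: Container) -> Container:
    # pretend this body was JIT-compiled by Numba
    return Container(x.dtype, x.shape, [v * 2 for v in x.data])

def glow_style_add_one(x: Container) -> Container:
    # pretend this was AOT-compiled by Glow and wrapped as a gumath kernel
    return Container(x.dtype, x.shape, [v + 1 for v in x.data])

x = Container("float64", (4,), [1.0, 2.0, 3.0, 4.0])
# Kernels from "different backends" chain without conversion, because
# they agree on the container's dtype/shape/buffer layout.
y = glow_style_add_one(numba_style_double(x))
```

Where this sketch would break down in practice (strides, device memory, ownership, type signatures richer than dtype+shape) is exactly the "see where we stop" experiment the issue proposes.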
