Looking for testers: ONNX execution on CPU/GPU + ONNX→TOSA export (no model rewrite required)

Hi everyone,

I am currently experimenting with a runtime/compiler MVP that can:
(1) execute ONNX models directly on CPU and GPU, and
(2) export ONNX models to TOSA,
in both cases without any manual model rewriting.

The idea is to make ONNX act as a stable frontend while targeting multiple backends (CPU/TOSA/GPU) from the same model graph. The next milestone is ONNX→Torch export.

I am looking for people interested in testing the MVP and providing technical feedback (compatibility, edge cases, performance, ONNX ops coverage, TOSA export, etc.).
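For anyone comparing backends, a simple way to report compatibility issues is to run the same inputs through two backends and check that the outputs agree within a floating-point tolerance. Here is a minimal sketch of such a check; `outputs_match` is a hypothetical helper name (not part of the MVP's API), and the loose tolerances reflect that CPU and GPU kernels legitimately differ in rounding:

```python
import numpy as np

def outputs_match(reference, candidate, rtol=1e-4, atol=1e-5):
    """Compare one backend's output arrays against another's.

    Returns True when every pair of output arrays agrees within the
    given tolerances; rtol/atol are deliberately loose because CPU
    and GPU kernels round differently.
    """
    if len(reference) != len(candidate):
        return False
    return all(
        np.allclose(r, c, rtol=rtol, atol=atol)
        for r, c in zip(reference, candidate)
    )

# Example: a tiny rounding difference passes, a real divergence fails.
ref = [np.array([1.0, 2.0, 3.0])]
ok = [np.array([1.0, 2.0, 3.0 + 1e-6])]
bad = [np.array([1.0, 2.0, 3.5])]
print(outputs_match(ref, ok))
print(outputs_match(ref, bad))
```

Reporting the failing op, input shapes, and the max absolute difference alongside a mismatch makes the feedback much easier to act on.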

This is a closed binary preview (pip wheel), but anyone is welcome to try it. Just send me a message and I will provide the wheel and instructions.

Requirements:
- Python 3.x
- ONNX model(s) to test

Thanks in advance to anyone willing to push the system a bit.