Making Interoperable AI Models a Reality


In one of the last meetup events of the year, AI Apprenticeship Programme (AIAP™) graduate Lee Yu Xuan was invited to speak at Google Singapore by TensorFlow and Deep Learning Singapore. This meetup group consistently delivers cutting-edge material, especially the latest in TensorFlow development, for the hardcore technical geek, and I always try not to miss it. Organiser Martin Andrews had been looking for a way to convert between PyTorch and TensorFlow deep learning models and came across an article Yu Xuan had written earlier this year. Since Yu Xuan had already done prior work in this area with the ONNX open format, Andrews invited him to share his experience using it, which he gladly accepted.

What and Why ONNX

ONNX stands for Open Neural Network Exchange. As the name suggests, it provides interoperability between different deep learning frameworks. Even after the demise of Theano in 2017, it is clear that AI research and production will remain a multi-polar ecosystem, in which TensorFlow and PyTorch are currently the most popular frameworks. Both are open source, but they are not interoperable. The ability to share models, and to move training and inference between frameworks, is among the goals that ONNX seeks to fulfill.

Our goal is to make it possible for developers to use the right combinations of tools for their project. We want everyone to be able to take AI from research to reality as quickly as possible without artificial friction from toolchains.

– From the ONNX webpage

ONNX started in 2017 as a community project by Facebook and Microsoft. It has also received support from notable names including Intel, AWS, Huawei and NVIDIA, among others. Last month, the LF AI Foundation welcomed it as a graduate-level project. ONNX also supports a collection of pre-trained state-of-the-art (SOTA) models contributed by the community.

While much has been accomplished, perhaps the biggest stride forward would be getting Google on board. To date, the Mountain View giant has been absent from the community, and its absence is felt: TensorFlow 2.0, released at the end of September this year, is still not supported by ONNX. Such support is clearly what the industry needs, and the community is no doubt working hard to fill the gap. In time, we hope that friction from switching between frameworks will be a thing of the past.
