This is an example of using models trained with TensorFlow or ONNX in a Unity application for image classification and object detection. It uses the Barracuda inference engine - please note that Barracuda is still in development preview and changes frequently.
More details are in my blog post.
If you're looking for a similar example that uses the TensorFlowSharp plugin instead of Barracuda, see my previous repo.
You'll need Unity 2019.3 or above. Versions 2019.2.x seem to have a bug with WebCamTexture and Vulkan that causes a memory leak.
- Open the project in Unity.
- Install the Barracuda 0.4.0-preview plugin from
Window -> Package Manager (the sample didn't work with 0.5.0-preview last time I checked).
- In Edit -> Player Settings -> Other settings, make sure that you have Vulkan in Graphics APIs for Android, or Metal for iOS (uncheck Auto Graphics API if necessary). Barracuda is also supposed to work with OpenGLES3 + ES3.1, but I didn't have any luck with it.
- Open the Classify or Detect scene in the Assets folder.
- Make sure that the Classifier or Detector object has its Model file and Labels file set.
- In File -> Build Settings, choose one of the scenes and hit
Build and run.
For iOS, you might need to fix the team settings and the camera privacy request message in Xcode.
The Barracuda repository can be found here.
How to use your own model
There is only a limited range of neural network architectures that I managed to get running with Barracuda. Read my blog post to see what works and what doesn't.
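To give a rough idea of what wiring your own model into a Unity component involves, here is a minimal Barracuda sketch. It only illustrates the general load/execute/dispose pattern; the class name `ModelRunner`, the field names, and the choice of the `ComputePrecompiled` backend are illustrative assumptions, not the sample's actual scripts, and the exact namespace and API surface vary between Barracuda preview versions:

```csharp
using Barracuda; // namespace in the 0.x previews; later versions use Unity.Barracuda
using UnityEngine;

public class ModelRunner : MonoBehaviour
{
    // Assigned in the Inspector, like the Model file on the Classifier/Detector object
    public NNModel modelFile;

    private IWorker worker;

    void Start()
    {
        // Load the serialized model and create a GPU compute worker for it
        var model = ModelLoader.Load(modelFile);
        worker = WorkerFactory.CreateWorker(WorkerFactory.Type.ComputePrecompiled, model);
    }

    public Tensor Run(Tensor input)
    {
        // Schedule inference and fetch the (first) output tensor
        worker.Execute(input);
        return worker.PeekOutput();
    }

    void OnDestroy()
    {
        // Workers hold GPU resources and must be disposed explicitly
        worker?.Dispose();
    }
}
```

The caller is responsible for building the input `Tensor` (for example from a WebCamTexture frame) and for disposing tensors it creates.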
I'm not a Unity expert, so if you find any problems with this example, feel free to open an issue.