The MediaTek backend enables ExecuTorch to speed up PyTorch models on edge devices equipped with a MediaTek Neuron Processing Unit (NPU). This document offers a step-by-step guide to setting up the build environment for the MediaTek ExecuTorch libraries.
Follow the steps below to set up your build environment:
Setup ExecuTorch Environment: Refer to the Setting up ExecuTorch guide for detailed instructions on setting up the ExecuTorch environment.
Setup MediaTek Backend Environment
Install the Python dependencies from the `backends/mediatek/` directory:

```bash
pip3 install -r requirements.txt
```

Install the MediaTek SDK wheels:

```bash
pip3 install mtk_neuron-8.2.13-py3-none-linux_x86_64.whl
pip3 install mtk_converter-8.9.1+public-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl
```

Set the environment variable pointing to the Neuron buffer allocator library:

```bash
export NEURON_BUFFER_ALLOCATOR_LIB=<path_to_buffer_allocator.so>
```
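As a sanity check before building, the snippet below verifies that `NEURON_BUFFER_ALLOCATOR_LIB` is set and points to an existing file. The helper name `check_neuron_env` is illustrative only, not part of the MediaTek tooling.

```shell
# Hypothetical helper: verify the Neuron buffer allocator is configured.
check_neuron_env() {
  if [ -z "${NEURON_BUFFER_ALLOCATOR_LIB:-}" ]; then
    echo "NEURON_BUFFER_ALLOCATOR_LIB is not set" >&2
    return 1
  fi
  if [ ! -f "${NEURON_BUFFER_ALLOCATOR_LIB}" ]; then
    echo "buffer allocator not found: ${NEURON_BUFFER_ALLOCATOR_LIB}" >&2
    return 1
  fi
  echo "environment OK"
}
```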
Exporting a PyTorch Model for MediaTek Backend:
Lower and export the `.pte` file for on-device execution. Sample export scripts are provided under `examples/mediatek/`. For example, the following command exports a `.pte` using the provided scripts:

```bash
cd executorch
./examples/mediatek/shell_scripts/export_oss.sh mobilenetv3
```
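If you need to export several models, a small wrapper around the script can help. The loop below only prints the commands (a dry run), and the model names are illustrative; check `examples/mediatek/` for the models `export_oss.sh` actually supports.

```shell
# Hypothetical dry-run wrapper: print the export command for each model
# instead of running it. Model names are examples only.
print_export_cmds() {
  for model in "$@"; do
    echo "./examples/mediatek/shell_scripts/export_oss.sh $model"
  done
}

print_export_cmds mobilenetv3 resnet50
```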
The exported `.pte` files are saved in a directory named after the model.

Build MediaTek Backend for ExecuTorch Runtime
Navigate to the `backends/mediatek/scripts/` directory.
Build MediaTek Backend: Once the prerequisites are in place, run the `mtk_build.sh` script to start the build process:

```bash
./mtk_build.sh
```
The MediaTek backend will be built under `cmake-android-out/backends/` as `libneuron_backend.so`.
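After the build finishes, a quick existence check can confirm the library landed where expected. The helper below is a hypothetical convenience, not part of the build scripts.

```shell
# Hypothetical check: confirm mtk_build.sh produced the backend library.
check_backend_built() {
  lib=${1:-cmake-android-out/backends/libneuron_backend.so}
  if [ -f "$lib" ]; then
    echo "found: $lib"
  else
    echo "missing: $lib" >&2
    return 1
  fi
}
```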
Build a runner to execute the model on the device:

```bash
./mtk_build_examples.sh
```
The runner binary will be built under `cmake-android-out/examples/`.

Push MediaTek universal SDK and MediaTek backend to the device: push `libneuronusdk_adapter.mtk.so` and `libneuron_backend.so` to the phone and add them to the `$LD_LIBRARY_PATH` environment variable before executing ExecuTorch with the MediaTek backend:
```bash
export LD_LIBRARY_PATH=<path_to_usdk>:<path_to_neuron_backend>:$LD_LIBRARY_PATH
```
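One way to script the push step is with `adb`. The function below is a sketch; the on-device directory `/data/local/tmp/executorch` is an assumption you may need to change, and it presumes both `.so` files are in the current directory.

```shell
# Sketch: push the MediaTek runtime libraries to an Android device.
# The device directory is an assumed location; adjust for your setup.
deploy_mtk_libs() {
  device_dir=${1:-/data/local/tmp/executorch}
  adb push libneuron_backend.so "$device_dir"
  adb push libneuronusdk_adapter.mtk.so "$device_dir"
  # the runner must see both libraries on its library search path
  adb shell "export LD_LIBRARY_PATH=$device_dir:\$LD_LIBRARY_PATH"
}
```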