The current SDK supports setting up custom hand poses. Our current data flow is to get the pose from the Oculus Avatar SDK, convert it, and apply it to our (previously converted) mesh. For custom poses, we are wondering whether we need to feed them into the Oculus Avatar SDK at all, or whether we could just apply them directly to the converted hand mesh ourselves.
What is the benefit of feeding the custom pose to the Oculus SDK? Will it do blending automatically? Is there something else we are missing?