Headshot

It’s never a good sign when you find yourself in the digital backwater that is the fourth page of the Google search results. Pushing past ever more tangential links, you begin noticing the same Stack Overflow content endlessly recycled on increasingly dodgy sites, causing you to recall the handful of lost souls you encountered back in the hazy innocence of page one.

I found myself in this very predicament recently while searching for something that I thought should be simple: how to capture texture data of a user’s face using ARKit. But alas! Various combinations of keywords turned up nothing more than a few false GitHub leads and vague hints from the perennial DenverCoder9s of the world.

And although I never found the result I was looking for, in hopes of helping all of my fellow searchers out there, I ended up creating Headshot: a simple iOS app that demonstrates how to easily capture the texture of a user’s face in realtime with ARKit.

Links

The demo app applies the generated texture back to a 3D face model, creating a fully textured face model that can be rotated and moved independently of the current camera view. Here’s what HeadShot looks like in action:

The top-left corner shows the textured 3D face model, created in realtime from the front-facing camera data
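
One simple way to display that model is a SceneKit node backed by ARSCNFaceGeometry, with the generated texture as its diffuse material. Here’s a minimal sketch, assuming `faceTexture` is the texture the generation pass writes into each frame (the names are illustrative, not the demo’s actual API):

```swift
import ARKit
import SceneKit
import Metal

// A SceneKit node showing the face mesh with the generated texture applied.
// `faceTexture` is assumed to be the texture the generation pass writes into.
func makeTexturedFaceNode(device: MTLDevice, faceTexture: MTLTexture) -> SCNNode? {
    guard let geometry = ARSCNFaceGeometry(device: device) else { return nil }
    geometry.firstMaterial?.diffuse.contents = faceTexture
    geometry.firstMaterial?.lightingModel = .constant
    return SCNNode(geometry: geometry)
}

// Per frame: keep the mesh shape in sync with the tracked face.
func update(faceNode: SCNNode, from anchor: ARFaceAnchor) {
    (faceNode.geometry as? ARSCNFaceGeometry)?.update(from: anchor.geometry)
}
```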

Almost all the computation is done using a Metal render pipeline, so it is suitable for use in a 60fps game/app loop.
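
Concretely, the generated texture is just an offscreen Metal render target that the face gets drawn into every frame. Creating one might look like this (the 1024×1024 size and pixel format are my assumptions, not values from the demo):

```swift
import Metal

// The generated face texture is an offscreen render target that the
// texture-space pass draws into every frame.
func makeFaceTextureTarget(device: MTLDevice) -> MTLTexture? {
    let descriptor = MTLTextureDescriptor.texture2DDescriptor(
        pixelFormat: .bgra8Unorm,
        width: 1024,
        height: 1024,
        mipmapped: false
    )
    // Rendered into by the unwrap pass, then sampled by whatever displays the model.
    descriptor.usage = [.renderTarget, .shaderRead]
    return device.makeTexture(descriptor: descriptor)
}
```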

As with Reality Shaders, the implementation of HeadShot is actually surprisingly simple, but it took me a while to figure out just how to accomplish it. Here’s an overview:

  1. Use ARKit to track the user’s face and generate a rough geometric model of it (see the sketch after this list).
  2. Render the face model so that it matches the position of the user’s actual face in the scene.
  3. For each texture point on the face model, figure out where to sample from in the captured frame image.
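
For the first two steps, a minimal face-tracking setup might look like the sketch below (the class and method names are illustrative):

```swift
import ARKit

// A minimal sketch of the face-tracking setup for steps 1 and 2.
final class FaceTracker: NSObject, ARSessionDelegate {
    let session = ARSession()

    func start() {
        // Face tracking requires a device with a TrueDepth front camera.
        guard ARFaceTrackingConfiguration.isSupported else { return }
        session.delegate = self
        session.run(ARFaceTrackingConfiguration())
    }

    func session(_ session: ARSession, didUpdate frame: ARFrame) {
        guard let face = frame.anchors.compactMap({ $0 as? ARFaceAnchor }).first else { return }
        // Step 1: a rough face mesh (vertices, texture coordinates, triangle indices).
        let mesh = face.geometry
        // Step 2: the transform that places that mesh where the user's real face is.
        let modelMatrix = face.transform
        _ = (mesh, modelMatrix)
    }
}
```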

Steps 1 and 2 are all done for you by ARKit, so it’s step 3 that’s the interesting bit. The key insight I had is that you can render the face in texture space while continuing to use the actual vertex positions for sampling from the captured frame image. This lets you generate a texture map that can be applied back to the original face geometry:


The 2D face texture map.
The second dimension is not my most flattering angle.
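
In the demo that texture-space trick lives in the Metal vertex stage, but the mapping itself is only a couple of lines of math. Here’s the idea expressed as plain Swift (the function names are mine, and in practice this runs per-vertex on the GPU):

```swift
import simd

// Where to rasterize the vertex: its texture coordinate, remapped from
// UV space [0, 1] to Metal clip space [-1, 1] (with Y flipped, since
// texture space and clip space run in opposite vertical directions).
func textureSpacePosition(uv: SIMD2<Float>) -> SIMD4<Float> {
    SIMD4<Float>(uv.x * 2 - 1, (1 - uv.y) * 2 - 1, 0, 1)
}

// Where to sample the captured frame: project the vertex's actual 3D
// position into the camera image and convert from normalized device
// coordinates to texture coordinates.
func cameraImageCoordinate(vertex: SIMD3<Float>,
                           modelMatrix: simd_float4x4,
                           viewProjectionMatrix: simd_float4x4) -> SIMD2<Float> {
    let clip = viewProjectionMatrix * modelMatrix * SIMD4<Float>(vertex.x, vertex.y, vertex.z, 1)
    let ndc = SIMD2<Float>(clip.x, clip.y) / clip.w
    return SIMD2<Float>((ndc.x + 1) / 2, (1 - ndc.y) / 2)
}
```

Rasterizing at the first coordinate while sampling the captured frame at the second is what unwraps the face into the texture map above.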

Here are the relevant parts of the implementation:
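
In outline, the per-frame glue pulls the face anchor and camera out of the ARFrame, builds the two matrices the vertex stage needs, and kicks off the texture-space pass into the offscreen target. A rough sketch, with illustrative (not actual) type and function names:

```swift
import ARKit
import UIKit
import simd

// Uniforms handed to the texture-space render pass each frame.
struct FaceTextureUniforms {
    var modelMatrix: simd_float4x4           // face-anchor space -> world space
    var viewProjectionMatrix: simd_float4x4  // world space -> captured-frame clip space
}

func makeUniforms(frame: ARFrame,
                  faceAnchor: ARFaceAnchor,
                  viewportSize: CGSize,
                  orientation: UIInterfaceOrientation) -> FaceTextureUniforms {
    let camera = frame.camera
    // View and projection for the device camera that captured this frame, so
    // projecting a face vertex with them lands on the matching image pixel.
    let view = camera.viewMatrix(for: orientation)
    let projection = camera.projectionMatrix(for: orientation,
                                             viewportSize: viewportSize,
                                             zNear: 0.001,
                                             zFar: 1000)
    return FaceTextureUniforms(modelMatrix: faceAnchor.transform,
                               viewProjectionMatrix: projection * view)
}
```

These are exactly the model and view-projection matrices used in the mapping sketched above.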

The approach isn’t perfect, mind you, and I’ve put together some notes about its limitations as well as some thoughts on how you can potentially work around them. My hope is that the demo provides a good starting point to build on.

If you end up creating something fun using this technique, let me know! I’m also planning to explore some more face-based AR experiences soon, so stay tuned!