Texturing & LookDev
- Mikey Owen
- Nov 4, 2022
- 5 min read
Updated: Nov 15, 2022
Now that we've established our methods for clean up and retopology, there are only a few steps remaining until the completion of this assignment:
Reapplying the photogrammetry textures lost during retopology.
Creating a look development showcase video for the final meshes.
I will be covering both of these in this blog post, so we'll begin with reapplying the textures!
Reapplying the Photogrammetry Textures
An unfortunate byproduct of retopologising the meshes is that the textures created by 3DF Zephyr from my captures were lost. This is because they're applied to the mesh via UVs (a 2D map of the mesh), which are based on the polygons and layout of the geometry... which, of course, we changed during retopology.
I'll admit, this was absolutely the hardest and most time-consuming aspect of this assignment so far. I explored what feels like hundreds of different methods, using a mixture of my own understanding of texture mapping and tutorials online, but nothing seemed to work.
I was ready to completely admit defeat, but then recalled a facility I'd noted in ZBrush (during my retopology experiments) that might work, called Multi Map Exporter. This is a plugin that essentially exports all the maps you require for texturing (normal, diffuse, ambient occlusion, etc.) into image files that you can use in external texturing software.
I then realised that if I could project the maps from the original scan onto the optimised mesh in ZBrush, I could export the maps this projection created and apply them to the retopologised mesh!
This is a fairly lengthy workflow, and I know from conversations with my course mates that a lot of us have struggled with it. So I'm going to try and break it down as much as I can below:
Step 1: Create UVs for the optimised mesh.
The newly retopologised mesh will not currently have any UVs, which of course are crucial for texturing. Therefore the first step is to create some using whatever method you prefer. For this, I chose Autodesk Maya, as you can easily choose edge loops to act as 'seams' for the map to be built from. Or, if you're feeling particularly lazy like I am for this breakdown after hours spent finding a solution, you can even automatically generate these from the UV editor:

Once you have these in place, however you choose to do them, export the model and open ZBrush.
Step 2: Bring the two meshes into ZBrush
With ZBrush open, import the newly UVed, optimised mesh as a new subtool. Additionally, bring in the original mesh as a separate subtool and append it to the optimised one, so that both can be accessed from the same place.
From here, increase the subdivisions on your new mesh until its polycount is actually higher than that of the original scan. Then select the original scan, click Texture on the left-hand bar and import the image created by the photogrammetry software (this will always be produced regardless of which software was used; Zephyr thankfully generates it all as a single image, whereas I know ReCap produces it as several individual ones, in which case the multiple images will need to be imported, along with all the individual pieces of the original mesh).
NOTE: The image map(s) will also need to be flipped vertically in photo editing software before being imported, due to ZBrush using a different texture axis convention to most other software.
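The flip itself is just a reversal of the image's pixel rows, top to bottom. A minimal stand-in sketch (using a nested list in place of real pixel data — in practice you'd do this in an image editor or with an imaging library):

```python
# Sketch of the vertical flip: reversing the row order makes the texture
# match ZBrush's flipped V axis. The "pixels" here are placeholder values.
def flip_vertical(pixels):
    """Return the image with its pixel rows in reverse order."""
    return pixels[::-1]

image = [["r1a", "r1b"],
         ["r2a", "r2b"],
         ["r3a", "r3b"]]
print(flip_vertical(image)[0])  # ['r3a', 'r3b'] -- the last row is now first
```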
You will then need to enable the MRGB channel on the top bar, and go to the Polypaint drop-down menu. Click 'Polypaint From Texture', and your original scan should now show the photogrammetry textures within ZBrush!

Now that the textured scan is in, we can create the maps!
Step 3: Project onto optimised mesh and create maps
Now only a few steps remain before we have our textures! With the optimised mesh subdivided so that its polycount exceeds that of the original scan, you need to project the original scan onto the optimised mesh. You'll notice the geometry shift until it essentially matches the original scan note for note. ZBrush will also ask if you wish to transfer over the polypainting; click yes.
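As a rough sanity check for that subdivision step: each ZBrush subdivision level roughly quadruples the quad count, so you can estimate how many levels you'll need before projecting (a minimal sketch; the polycounts below are hypothetical):

```python
import math

# Each subdivision level roughly multiplies the quad count by 4, so the
# number of levels needed to exceed the scan's polycount is ceil(log4(ratio)).
def subdivisions_needed(retopo_polys, scan_polys):
    """Estimate how many subdivision levels lift the retopo mesh past the scan."""
    if retopo_polys >= scan_polys:
        return 0
    return math.ceil(math.log(scan_polys / retopo_polys, 4))

# e.g. a 10k-poly retopo mesh vs a 2M-poly scan:
print(subdivisions_needed(10_000, 2_000_000))  # 4 levels -> ~2.56M polys
```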
Once this is done, use Multi Map Exporter to export all the maps that you need: diffuse, ambient occlusion and normal being the prime three in my opinion.

Now that we have the exported maps that are compatible with the optimised mesh, all we have to do is apply them!
Step 4: Apply the maps in texturing software of your choice
Finally we now have some maps that can be used with the optimised mesh, so all that remains is applying them to it!
For this step I chose to use Substance Painter, as it's my go-to texturing application. So I opened a new scene there and imported the optimised mesh (as it was with its UVs prior to being brought into ZBrush), as well as the newly created maps.
I then designated the maps as 'textures' and imported them into the current session. From here it's simply a case of going to the Texture Set Settings and applying them under their respective headings, ending with the colour itself, which you add as a fill on the actual texture layer in Substance.

Voila! Original, high-quality photogrammetry textures applied to a clean-topology optimised mesh!
Creating My Look Development Video
Once I'd finished applying my methods for clean up and retopology to the remaining 4 meshes, it was time to get them all into Unreal Engine to create a look development video.
This was fairly straightforward to produce (especially as I'm lucky enough to have an Unreal Engine project file from last year with a mock-up photography studio already set up!), only requiring me to import the models and maps and quickly generate a wireframe material (easily done thanks to this helpful guide I found here).
Once these were all in place, I simply keyframed a turntable animation for each of the meshes and rendered each twice: once with the wireframe material and once with the textures. From there I took the image sequences into my editing software of choice and compiled them together, with some subtitles and slideshows from the image capturing sessions (for context to the models).
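The turntable itself is pure keyframing in Unreal, but the arithmetic behind it is simple enough to sketch: one full revolution spread evenly across the frame range (the frame count here is hypothetical):

```python
# Sketch of turntable keyframing: evenly spaced yaw values covering a full
# 360-degree revolution over a chosen number of frames.
def turntable_keys(num_frames):
    """Return (frame, rotation_y) pairs for one full turntable revolution."""
    step = 360.0 / num_frames
    return [(f, round(f * step, 3)) for f in range(num_frames + 1)]

keys = turntable_keys(120)
print(keys[0], keys[60], keys[-1])  # (0, 0.0) (60, 180.0) (120, 360.0)
```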
The final look development video can be viewed below:
Signing Off
With the look development video complete, I've now finished my first Data Capture assignment as well as my first assignment for my degree year!
I decided to round this coursework off on a good note with some high quality renders of the final meshes below. I'm really looking forward to using these for my temple assignment (more on this in a future post), and am really proud of the results I've achieved.





It's been a long road, but I can now move forward in my CGI education having built up a great knowledge of photogrammetry, geometry clean up and retopology workflows which will no doubt prove useful for wherever my journey takes me next!


