Burning Tag

CASE STUDIES: THQ WIRELESS BULK IMAGE PRODUCTION

In the summer of 2004, THQ Wireless was facing tight deadlines to create a series of NHL player mobile wallpapers. A small army of contractors was hired to brute-force through the project, as it had always been done. In short order it became obvious that there had to be a better way.

 

Before digging into this case study, it’s worth setting the stage to understand the marketplace. In 2004, the first point at which mobile content could be consumed at reasonable quality, the iPhone was still three years off. Android didn’t exist, and even the once-ubiquitous Motorola RAZR had not yet been released outside the company, even in prototype form. The marketplace was highly fragmented, but roughly divided between Motorola, Nokia, Samsung, Sony Ericsson, and a few other major players. At that point in time, a 176×220 display was considered massive and a color display was something of a luxury.

 

THE PROBLEM: MANUAL PRODUCTION DOESN’T SCALE TO INDUSTRIAL LEVELS.

In this era, no handset had achieved “must-have” status, and carrier-exclusive variants of handsets were very common. While handset manufacturers attempted to find the right combination of features and functionality, they eschewed any sort of standardization on displays. The result for content providers was an ever-growing nightmare: a steadily lengthening list of screen resolutions that sometimes differed by only one pixel in a single dimension. Unfortunately for content providers at the time, there was no option to repurpose a larger image for a smaller screen, so two completely separate files needed to be created.

 

Additionally, a watermarked version (stamped “SAMPLE”) had to be produced for each image. Furthermore, the filetypes necessary to complete a task were a large, sometimes-overlapping array of GIF, JPG, PNG, BMP and TIFF. The only way to be sure that absolutely everything had been covered was to produce every possible combination. This meant that for every possible screen resolution, ten files had to be created per unique image (five formats, watermarked and unwatermarked). Needless to say, this resulted in a massive number of files and a huge potential for error in naming and saving.
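
To make the combinatorics concrete, the short sketch below enumerates the deliverables for a single image. The resolutions listed and the naming convention are hypothetical stand-ins for illustration, not THQ’s actual scheme.

    from itertools import product

    # Hypothetical subset of target screen resolutions (the real list ran into the hundreds).
    resolutions = [(128, 128), (128, 160), (176, 208), (176, 220)]
    formats = ["gif", "jpg", "png", "bmp", "tif"]
    variants = ["retail", "sample"]  # clean deliverable vs. "SAMPLE"-watermarked preview

    # Every resolution needs every format in both variants: 5 formats x 2 variants = 10 files.
    def output_filenames(image_name):
        for (w, h), fmt, variant in product(resolutions, formats, variants):
            yield f"{image_name}_{w}x{h}_{variant}.{fmt}"

    files = list(output_filenames("player_01"))
    print(len(files))  # 4 resolutions x 5 formats x 2 variants = 40 files for one image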

 

 

At the time, production scale was an afterthought; the primary goal was simply to find a way to output the necessary files correctly, with no errors in nomenclature and no missed filetypes. Fortunately, Adobe had recently released Photoshop CS, which introduced a built-in scripting language on top of Photoshop’s existing automation support via Actions and Droplets. The new version also introduced a key concept, “Layer Comps”, which allowed pre-defined layer visibility states to be stored within a file.

 

The first task was to automate saving the current image state to the five basic formats. That was accomplished easily enough after some experimentation and after working around a few of the scripting language’s inconsistencies. The script was then extended to cycle through all layer comps in the file and save each one out to the five image formats. Mandatory copyright notices and the “SAMPLE” watermark were handled by auxiliary scripts that processed the output of the layer comp script.
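
The real automation lived inside Photoshop’s own scripting layer and is not reproduced here. As a rough stand-in, the following Python/Pillow sketch shows the equivalent export-and-watermark step for one rendered composition; the file names, watermark placement, and use of Pillow are assumptions made for the example.

    from PIL import Image, ImageDraw

    FORMATS = {"gif": "GIF", "jpg": "JPEG", "png": "PNG", "bmp": "BMP", "tif": "TIFF"}

    def export_all_formats(master_path, base_name):
        """Save one rendered composition to the five delivery formats,
        plus a "SAMPLE"-watermarked preview of each."""
        master = Image.open(master_path).convert("RGB")

        # Watermarked copy: stamp "SAMPLE" across the middle of the image.
        sample = master.copy()
        draw = ImageDraw.Draw(sample)
        w, h = sample.size
        draw.text((w // 4, h // 2), "SAMPLE", fill=(255, 255, 255))

        for ext, pil_format in FORMATS.items():
            master.save(f"{base_name}.{ext}", pil_format)
            sample.save(f"{base_name}_sample.{ext}", pil_format)

    export_all_formats("player_01_176x220.png", "player_01_176x220")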

 

With this basic workflow in hand, the error-prone output stage had been automated, effectively eliminating naming and format errors in the output files. This advance alone shifted time upstream to higher-value tasks, namely arranging artwork into interesting compositions and spending more time on design.

 

NEXT STEPS: REDUCING REDUNDANCY.

After a short while, it became obvious that there was still a great deal of duplicated work, and the production phase for the master files still consumed more time than it ought to. Many files had identical aspect ratios and differed only in raw pixel dimensions, or were sufficiently close to other ratios (e.g. 1.78:1 and 1.77:1). However, simply running batch scaling and canvas-size commands would likely clip or distort images, resulting in a poor product that would certainly be rejected.

 

By this time, THQ was producing content for over 100 distinct screen sizes, with an average of 80 images per property, meaning 8000 unique images that needed to be generated. It was possible, but it was still time-consuming and costly. The next step was to leverage aspect ratios and “safe areas” to minimize the work. By establishing safe areas of about 5-10 pixels in from the margin, depending on the ratio, designers could avoid placing key parts of the art in problematic areas. Two types of files became key workflow components:

  • Aspect Masters were master files built at pre-identified key aspect ratios. Target files tended to cluster around regular aspect ratios, and by choosing the median ratio of each cluster, only minor tweaks were required when resizing to near-neighbor ratios (again, 1.77:1 and 1.78:1 are not fundamentally different in composition); a grouping sketch follows this list.
  • Derived Masters were files created from the Aspect Masters. After a run of Aspect Masters was complete, they were fed into a script that performed canvas expansions/contractions and scaling operations, generating the 100+ derived master files. Because safe areas had been respected when composing the Aspect Masters, Derived Masters frequently needed just an eyeball check and the occasional nudge of an image.
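
As a rough illustration of the grouping idea, the sketch below buckets a handful of hypothetical target sizes by rounded aspect ratio and picks a median ratio per bucket as the Aspect Master candidate. The sizes, the width-to-height ratio convention, and the rounding precision are all assumptions for the example.

    from collections import defaultdict
    from statistics import median

    # Hypothetical target wallpaper sizes (width, height).
    targets = [(128, 128), (128, 160), (176, 208), (176, 220), (240, 320), (240, 321)]

    def group_by_ratio(sizes, precision=2):
        """Bucket target sizes by aspect ratio (width / height), rounded so that
        near-identical ratios (e.g. 240x320 vs. 240x321) land in the same bucket."""
        groups = defaultdict(list)
        for w, h in sizes:
            groups[round(w / h, precision)].append((w, h))
        return groups

    # One Aspect Master per bucket, composed at the median ratio of its members.
    for ratio, members in sorted(group_by_ratio(targets).items()):
        master_ratio = median(w / h for w, h in members)
        print(f"Aspect Master at ~{master_ratio:.3f} covers {members}")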

 

The workflow was simple: create one large master file and process it into approximately 20 Aspect Masters. Composition and layouts were tweaked on the Aspect Masters, which were then saved. The Aspect Masters were fed into a script that created the 100+ Derived Masters. The Derived Masters were checked, tweaked when necessary, and then run through the Layer Comp output script. Finally, the files were stamped with copyright notices and watermarked.
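
The Derived Master step amounts to a scale-to-cover plus a canvas trim, which the safe areas make tolerant of small crops. Below is a minimal Pillow-based sketch of that operation under those assumptions; the real Derived Masters remained layered Photoshop files, whereas this example flattens to PNG and invents its own input and output naming.

    import math
    from PIL import Image

    def derive_masters(aspect_master_path, target_sizes):
        """Scale an Aspect Master so it fully covers each target size, then
        center-crop the overflow. Safe areas in the Aspect Master composition
        keep key art away from the trimmed edges."""
        master = Image.open(aspect_master_path)
        mw, mh = master.size
        for tw, th in target_sizes:
            scale = max(tw / mw, th / mh)  # cover the target; never letterbox
            resized = master.resize((math.ceil(mw * scale), math.ceil(mh * scale)), Image.LANCZOS)
            left = (resized.width - tw) // 2
            top = (resized.height - th) // 2
            resized.crop((left, top, left + tw, top + th)).save(f"derived_{tw}x{th}.png")

    derive_masters("aspect_master_0.80.png", [(128, 160), (176, 220), (176, 222)])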

 

 

At this point, designers were essentially creating 1600 files by hand and running spot-checks on the remaining 6400 files, which was a quick operation (usually at a rate of about one image every 10 seconds). A team of a handful of designers was able to process 8000 images in a little over 12 hours. Because composition was driven by aspect ratios, the output looked as though each image had been hand-created.

 

FINAL OPTIMIZATION: DO ONLY WHAT YOU MUST.

By now, the system was operating at peak throughput, producing millions of images in a matter of weeks. Vector-based image processing was even more efficient than raster work, depending solely on the raw throughput of the computers used, with perfect fidelity to the original source and no need for manual intervention. However, there was no escaping the fact that production exceeded demand: THQ could essentially build images “to order” instead of producing everything possible.

 

The final optimization, then, was to figure out a way to automatically generate the minimal set of Aspect Masters needed to serve the final target set of image dimensions (what would become the Derived Masters). A bottom-up process was used, and script generation was moved to a small web service. The web service took a list of all target resolutions (part of the original work order), calculated the aspect ratio of each target size, and grouped the targets by identical ratio. It would then look for clusters of near-identical aspect ratios (e.g., 1.76:1, 1.77:1, 1.78:1) and determine the optimum center-point for each Aspect Master. The allowable spread of a cluster was configurable as well (the above example could conceivably have covered 1.71:1 through 1.83:1 by tweaking a variable, though in reality that would have been too wide a cluster).
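
The planning logic can be sketched roughly as follows. This is not the actual web service; the spread value, the size list, and the output structure are assumptions made for the example.

    from statistics import median

    def plan_aspect_masters(target_sizes, spread=0.02):
        """Cluster target aspect ratios that fall within `spread` of each other,
        then pick a center ratio for each cluster; one Aspect Master is built
        per cluster and serves every target size inside it."""
        ratios = sorted((w / h, (w, h)) for w, h in target_sizes)
        clusters, current = [], [ratios[0]]
        for item in ratios[1:]:
            # Start a new cluster when the gap from the cluster's first ratio exceeds the spread.
            if item[0] - current[0][0] > spread:
                clusters.append(current)
                current = [item]
            else:
                current.append(item)
        clusters.append(current)
        return [{"master_ratio": round(median(r for r, _ in cluster), 3),
                 "serves": [size for _, size in cluster]}
                for cluster in clusters]

    # Hypothetical work order: only the sizes actually requested get planned.
    for plan in plan_aspect_masters([(176, 220), (176, 208), (240, 320), (240, 321), (128, 160)]):
        print(plan)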

 

Custom scripts would then be generated, with the minimal set of Aspect Masters serving the final Derived Master set. The Derived Masters would be exactly what was requested and no more, with subsequent orders treated as new projects (though all still stemming from the original Master file).

 

Because the system had already been stress-tested, this optimization was largely a QA exercise, and it was proven functional within a few short days.

 

CONCLUSION

When THQ first started this project, time constraints meant it could only offer the top names in each of its properties. If production had scaled up as it ultimately did without the corresponding automation, THQ would have regularly had to manually create and name 80,000 files per property. Instead, hands-on production was reduced to the low thousands: only about 2% of the files created needed actual designer time to result in high-quality output, and every source file ahead of the final derivations had a designer look at it and adjust it.

 

The increase in automation allowed THQ to spend on hardware to parallelize operations and reduce computing time, turning long serial runs into short parallel runs. THQ was able to reduce its full-time and contract staff for graphics production, as well as the amount of offshore labor it used, without sacrificing quality or turnaround time.

 

At its peak the system was capable of producing 1.2 million final images in a week on specialized runs, with more standard throughput coming in at around 150,000 final images per week. These numbers eventually came down after the system was tweaked to produce only the batches that were actually needed.