Group Portrait Maker (GPM) was initially produced by TrinaStudios Inc. Kathy’s Trina characters were customized in Photoshop and sold as gifts. Early versions focused on honoring professions and talents.


Although each character was assembled from a stock image and edited for hair color and style, the time required per image was a major bottleneck, and some form of automation was needed. A Flash app was explored but abandoned because of the complexity of handling the combinations and permutations.

The first version of GPM provided very useful information about what our market would buy. The most important character attributes were hair color, hair style, skin color, and, most significantly, fashion. Faces were not expected to be precise, since the product was a drawing. Our customers were happy to pay for these artistic representations of their family, friends, clubs, teams, etc. Because of the slow production time for each image, however, Group Portrait Maker was shelved for more than a decade. The development of convolutional neural networks and content/style image pipelines changed everything.

In 2019, “WarpGAN: Automatic Caricature Generation” was published by Yichun Shi, Debayan Deb, and Anil K. Jain at Michigan State University, and Group Portrait Maker was revived. Over the past few years, GPM’s development has oscillated between a process that would produce a 3D model of caricatures and 2D images generated by latent diffusion models.


Efforts to develop a 3D version of WarpGAN that produced a caricaturized 3D head and body directly from a photo were unsuccessful. A 3D-first approach followed by artistic style processing was then tried, and a sample Unreal Engine app fragment was developed.

Meanwhile, Grove OZ Biz was awarded a developer partnership agreement with Browzwear, giving the fashion component of the image pipeline a very sophisticated garment generator for portrait production. Any garment for any size of person could be assigned to the actors, and ethnicity, body type, height, weight, skin color, hair style, head shape, and so on could now be generated automatically. With an AI-first approach, these generative scripts could become the process by which a new data source is generated for a transfer model as a companion to DALL-E, Midjourney, Stability AI, etc.
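The idea of scripted attribute generation can be sketched as follows. This is a minimal illustration, not the actual Browzwear-driven tooling: the attribute lists, the `ActorSpec` class, and the prompt wording are all hypothetical, standing in for whatever the real pipeline would draw from the partner software.

```python
from dataclasses import dataclass
from itertools import product

# Hypothetical attribute sets; a real pipeline would pull these from
# the garment/avatar tooling rather than hard-coded lists.
HAIR_COLORS = ["black", "brown", "blonde", "red"]
HAIR_STYLES = ["short", "long", "curly"]
SKIN_TONES = ["light", "medium", "dark"]
GARMENTS = ["suit", "dress", "hoodie"]

@dataclass(frozen=True)
class ActorSpec:
    """One fully specified actor, ready to be rendered or prompted."""
    hair_color: str
    hair_style: str
    skin_tone: str
    garment: str

    def to_prompt(self) -> str:
        """Render the spec as a text prompt for an image generator."""
        return (f"portrait of a person with {self.hair_style} "
                f"{self.hair_color} hair, {self.skin_tone} skin tone, "
                f"wearing a {self.garment}")

def generate_specs():
    """Enumerate every attribute combination -- the same combinatorial
    explosion that overwhelmed the original Flash app, but trivial to
    script as a data-generation step."""
    for combo in product(HAIR_COLORS, HAIR_STYLES, SKIN_TONES, GARMENTS):
        yield ActorSpec(*combo)

specs = list(generate_specs())
print(len(specs))  # 4 * 3 * 3 * 3 = 108 combinations
```

Each generated spec could then be rendered (or prompted) once, yielding a labeled synthetic dataset for fine-tuning a transfer model.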

More recently (September 2022), DALL-E 2 and Midjourney became the basis for a significant pivot in GPM’s development tactics.


These DALL-E- and Midjourney-generated images demonstrate that the diffusion model approach is the most promising path for developing and implementing Group Portrait Maker. Users would be able to assign their own and their friends’ faces to the actors and place any garment, in any style, in any location and any weather condition. Future features might include a speech-to-text-to-image-to-model pipeline producing scripted animation with audio.
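The user-facing assignment step described above could be sketched as assembling a structured request into a single diffusion prompt. Everything here is an assumption for illustration: the `GroupPortraitRequest` class, its fields, and the prompt template are hypothetical, and an actual product would hand the resulting string to the DALL-E or Midjourney API along with face-conditioning inputs.

```python
from dataclasses import dataclass, field

@dataclass
class GroupPortraitRequest:
    """Hypothetical request object capturing a user's choices:
    which actors appear, what each wears, and the scene setting."""
    actors: list[str]                      # people assigned to actors
    garments: dict[str, str]               # actor name -> garment text
    location: str
    weather: str
    art_style: str = "caricature drawing"  # the product is a drawing

    def to_prompt(self) -> str:
        """Compose one text prompt for a diffusion model."""
        people = ", ".join(
            f"{name} wearing {self.garments.get(name, 'casual clothes')}"
            for name in self.actors
        )
        return (f"{self.art_style} of {people}, "
                f"at {self.location}, {self.weather} weather")

# Example: two friends, custom garments, a chosen place and weather.
req = GroupPortraitRequest(
    actors=["Alice", "Bob"],
    garments={"Alice": "a red ski jacket", "Bob": "a blue parka"},
    location="a mountain lodge",
    weather="snowy",
)
print(req.to_prompt())
```

Keeping the request structured (rather than free text) is what lets the same user choices drive either a 2D diffusion prompt or, later, a scripted 3D animation.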