January 20, 2025
Insight
Virtual Try-On Rebirth: An Overview
Written by Ricardo Sousa, Ph.D.
SeeOnMe, Chief Artificial Intelligence & Co-Founder
Introduction
Virtual Try-On, commonly known by its acronym VTON, saw its first documented work around 2000. By enabling customers to visualize clothing, footwear, and accessories without physically trying them on, this technology has expanded the possibilities for addressing a long-standing customer problem that dates back well before 2000 [1].
To give a bit more context, we need to go back to the first Industrial Revolution. This milestone brought automation to industry and increased production, from which the fashion market also benefited: clothes became more affordable through lower manufacturing costs and higher volumes. At the same time, interest in tailored clothing grew. This created a need for people with similar body types to serve as fitting models, leading to the emergence of mannequins and, later, fashion models (see the Elle article for a more comprehensive overview of the evolution of the fashion model). This marks the beginning of fashion democratization.
The ability to visualize clothing on different body types is central to this revolution in fashion democratization, a transformation that continues to evolve today. While retail and e-commerce have introduced new ways to showcase fashion, there has historically been a disconnect between the idealized bodies used in fashion presentation and real consumers' body shapes.
Frédéric Cordier and collaborators tapped into this need in their seminal work [3], laying the groundwork for numerous advances that followed.
Virtual Try-On Roots
Interestingly enough, Virtual Try-On has its motivations in the Virtual Reality domain [2], particularly in the modeling of clothing. This technology enabled fashion designers and manufacturers to simulate garments, experiment with different fabrics, and visualize products across various poses and body movements before initiating production. While early applications favored virtual-world representation over rendering fidelity, a surge of interest in high-quality generation has since emerged [1].
Shortly thereafter, a substantial number of solutions emerged (see Figure below), yet few achieved compelling, customer-ready results. This is because the key challenges in virtual try-on technology include:
Diverse fashion model poses that complicate accurate garment rendering;
Variable fabric behavior across different poses and body shapes;
Lighting variations and photo conditions that can produce artificial-looking results;
Limited access to precise product and body measurements while maintaining user privacy.
Additionally, fashion product catalogs introduce complexity through their inherent diversity. The combination of these challenges, along with customer demands for high-fidelity garment visualization, creates significant technical barriers in meeting market expectations.
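To make the first two challenges concrete, consider the classic two-stage image-based pipeline popularized by characteristic-preserving approaches such as [8]: first warp the flat garment image toward the target pose, then fuse it with the person image using a predicted clothing mask. The toy NumPy sketch below is purely illustrative (the function names, the integer flow field, and the hard mask are simplifications; real systems use learned thin-plate-spline or appearance-flow warps and soft composition masks):

```python
import numpy as np

def warp_garment(garment, flow):
    """Toy 'geometric matching' stage: shift garment pixels by an
    integer flow field. Real systems learn a smooth, sub-pixel warp."""
    h, w, _ = garment.shape
    ys, xs = np.mgrid[0:h, 0:w]
    src_y = np.clip(ys - flow[..., 0], 0, h - 1)
    src_x = np.clip(xs - flow[..., 1], 0, w - 1)
    return garment[src_y, src_x]

def fuse(person, warped, mask):
    """Toy 'try-on' stage: composite the warped garment onto the
    person image wherever the clothing mask is active."""
    m = mask[..., None].astype(float)
    return m * warped + (1.0 - m) * person

# Tiny 4x4 RGB example: dark person image, light garment, identity warp.
person  = np.zeros((4, 4, 3))
garment = np.full((4, 4, 3), 0.8)
flow    = np.zeros((4, 4, 2), dtype=int)
mask    = np.zeros((4, 4), dtype=bool)
mask[1:3, 1:3] = True  # pretend this is the torso region

result = fuse(person, warp_garment(garment, flow), mask)
```

Even in this simplification, the listed challenges are visible: a wrong flow field distorts the garment (pose diversity), a rigid warp ignores how fabric drapes (fabric behavior), and naive compositing ignores lighting, producing the artificial-looking seams mentioned above.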
Nonetheless, developments in the field continued apace. The number of publications focused on VTON has skyrocketed in the last decade (an approximately 1000% increase since 2014). Taking the challenges identified above into account, efforts went into collecting user measurements for personalized avatar creation [7] or into preserving garment characteristics with high fidelity in the generated image [8]. Addressing these challenges early on continues to pave the way to more realistic generations.

In recent years, new paradigms have emerged due to two key factors: 1) access to vast amounts of data, and 2) increased computational power, which has enabled the revival of VTON technology.
Virtual Try-On Rebirth
The recent breakthroughs in Generative AI have contributed significantly to the rebirth of Virtual Try-On technology (VTON). Google Trends analysis since 2008 reveals a clear pattern: an upward trajectory beginning in 2018, followed by notable peaks in early 2020 and a surge in 2024. These milestones coincide with key breakthroughs leveraging advances in Generative Adversarial Networks (GANs), U-Net, and Transformer models [4, 5].

Virtual Try-On Search Interest since 2008.
Attributing this renewed interest solely to a few key innovations oversimplifies this transformative period. These technological advances have produced measurable impacts on productivity and efficiency across multiple industries, extending well beyond perceptual improvements. The fashion industry exemplifies this transformation.
Naturally, certain fundamental concerns remain. Questions about training data sources, storage methods, and customer data privacy continue to affect business decisions in this space. But rather than hindering progress, this complexity has only accelerated innovation, spurring the creation of over 5,000 startups in the United States since 2013, and opening new avenues for business development [6].
Sustaining this momentum amid widespread global interest, major retailers such as Walmart, luxury brands like Zelig, and tech giants including Google and Amazon have been leading the implementation of practical VTON solutions. The paradigm shift is further evidenced by strategic decisions like Amazon's replacement of their "Prime Try Before You Buy" service with AI-powered virtual try-on tools.
Concluding Remarks
Despite its early adoption by several retailers, virtual try-on technology has seen limited real-world success due to technical limitations, a lack of "killer applications" addressing user needs, and poor user experience. Of particular concern is low visual fidelity, which often produces unconvincing outputs that discourage user adoption.
As we outlined in this article, recent technological advancements in Generative AI have renewed optimism in virtual try-on solutions. These developments are validated by new applications that can generate garments on target customers' images with significantly higher fidelity than previous technological solutions. This advance is leading to enhanced virtual try-on solutions, spawning new business models that serve brands and retailers both directly and indirectly.
All of this leaves us wondering: as virtual try-on blurs the line between our physical and digital selves, are we trying on clothes, or simulating identities? If AI can make anyone "wear" anything, does fashion lose its edge as a cultural signal? [9, 10]
Further Reading
Sources used to support the content in this article.
[1] Volino, P. Virtual Clothing: Theory and Practice. Springer Berlin, 2000.
[2] Alcañiz, Mariano, Enrique Bigné, and Jaime Guixeres. "Virtual reality in marketing: a framework, review, and research agenda." Frontiers in Psychology 10 (2019): 1530.
[3] Cordier, Frédéric, et al. "From 2D photos of yourself to virtual try-on dress on the web." People and Computers XV—Interaction without Frontiers: Joint Proceedings of HCI 2001 and IHM 2001. Springer London, 2001.
[4] Yang, Han, et al. "Towards photo-realistic virtual try-on by adaptively generating-preserving image content." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2020.
[5] Vaswani, Ashish, et al. "Attention is all you need." Advances in Neural Information Processing Systems (2017).
[6] https://www.visualcapitalist.com/mapped-the-number-of-ai-startups-by-country/
[7] Tong, Jing, et al. "Scanning 3D full human bodies using Kinects." IEEE Transactions on Visualization and Computer Graphics 18.4 (2012): 643-650.
[8] Wang, Bochao, et al. "Toward characteristic-preserving image-based virtual try-on network." Proceedings of the European Conference on Computer Vision (ECCV). 2018.
[9] https://www.rockandart.org/the-role-of-fashion-in-cultural-identity/
[10] https://spyscape.com/article/espionage-and-the-psychology-of-fashion