Until a few years ago, Instagram, TikTok and YouTube had turned ordinary people into powerful sellers. Brands chased follower counts, and almost every field – food, fashion, fitness, travel – had its star. From roadside dhaba owners to day laborers, many documented their lives and grew into popular social-media influencers. Even doctors jumped into the fray. Their content – posts, newsletters and videos – resonated with audiences. Fees soared, with top creators reportedly charging up to ₹22,00,000 (2.2 million) per reel. AI-generated Virtual Influencers (VIs) can help, but only with transparency and co-creation with human creators.
While audiences were still figuring out tools like ChatGPT, brands introduced AI-generated virtual influencers – computer-created characters that look real, post daily and promote products. The trend began in Japan and quickly spread to the US, China and South Korea. India now has homegrown versions with real brand deals and growing reach. This matters because virtual personalities change how influence is created and sold: they can post endlessly, never age and never go off brief. That power also pushes new questions to the forefront – about authenticity, disclosure, and the impact on young audiences.
VIs can also cut costs, expand content into multiple languages, and give brands tighter control. For example, a post from Mia Zelu, a completely AI-generated influencer with over 150,000 Instagram followers, reportedly costs around ₹1,49,135 ($1,694), letting even small brands buy studio-ready content at scale. Because teams script everything, avatars never go off message; they are always available and can create content on demand, with no travel, shoots or schedules. They are also cost-effective and trend-responsive, lending themselves to stylized aesthetics and augmented reality (AR)/virtual reality (VR) tie-ins – advantages that explain the rush to adopt them before the harder questions about trust get asked.
Yet those same strengths bring risks. VIs can blur authenticity, hide paid promotions and promote unhealthy ideals, especially for vulnerable groups. Because they are non-human, many followers consider them less trustworthy than human creators. As a result, if a VI “misbehaves,” the company behind it and the brands it endorses may be blamed, and even held liable. A recent flashpoint: Gucci’s use of the Korea-made virtual endorser OH_ROZY drew heated comments on Instagram that the campaign felt insincere and off-brand – a reminder that authenticity is fragile. Research also shows that when AI influencers become too human-like, consumers recoil – the “Uncanny Valley” effect. And because they have no physical presence, virtual influencers sharpen questions of authenticity, the key currency of the influencer industry.
Nevertheless, proponents argue that brands are adopting VIs to lift sales and engagement, including for utilitarian products, which helps smaller brands without celebrity budgets. They also note that VIs can attract strong engagement (likes and comments) and avoid the risks that come with humans; when the Tiger Woods scandal broke, Accenture severed ties overnight. But because influence depends on trust, we need clear disclosures: prominent in-frame and in-caption labels stating that the influencer is AI-generated and that the post is a paid partnership. Platforms must fix or remove flagged posts within one to three days, age-gate minors and set rules on realism. With these guardrails, brands can use VIs without eroding trust.
Use VIs – but only within these ground rules. Align campaigns with Gen Z’s values and digital habits; done well, VIs can deepen engagement and spotlight social issues. Design avatars that reflect real physical diversity and embrace imperfection – Brenn, a plus-size Black virtual model, was created as the counterpoint to Shudu’s idealized perfection. Clearly mark that the influencer is non-human; Lil Miquela, for instance, states in her posts that she is an AI robot. Protect children: exclude under-13s from targeting and placements. Maintain an audit trail of scripts, prompts, assets and approvals, with human sign-off before posting. Measure more than reach – track brand lift, trust and complaint rates. Review the results after a time-limited pilot (for example, 12 weeks) and enforce a clear go/no-go rule.
To sum up before the next reel loads: say it is AI, say it is paid, and say who runs it. Pair each avatar with a real creator. Keep children out. Fix bad posts fast – within three days. Do this, and you keep the magic and the trust. Skip it, and expect boycotts, fines and followers scrolling past. As India meets its artificial stars, let honesty be the motto.
This article is written by Arpita Srivastava, Associate Professor, XLRI-Xavier School of Management, Jamshedpur.


