Onix, a new AI startup, is building what it describes as a "Substack of bots" — a platform where health and wellness influencers deploy AI digital twins of themselves to dispense advice to paying subscribers 24 hours a day, according to reporting by Wired.
The concept arrives at a moment when creator monetisation is maturing and AI companionship platforms are attracting serious venture interest. By grafting subscription economics onto conversational AI, Onix is betting that fans of nutritionists, therapists, and wellness coaches will pay for on-demand access to a bot trained on their favourite expert's voice, knowledge, and, crucially, personality.
How the Platform Actually Works
Onix's model is straightforward in structure: a human expert — a therapist, a dietitian, a fitness influencer — licenses their identity and expertise to the platform, which uses that licensed material to train a personalised AI chatbot. Followers then pay a subscription fee to interact with the bot at any hour, asking questions they might otherwise save for an expensive or hard-to-book appointment.
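In code terms, the flow described above reduces to two components: an expert persona and a paywall in front of it. The sketch below is purely illustrative — every name in it is hypothetical, and it stands in for whatever LLM backend Onix actually uses with a canned reply, just to make the subscription-gated persona pattern concrete.

```python
from dataclasses import dataclass

# Hypothetical sketch only: these names are illustrative, not Onix's actual API.

@dataclass
class ExpertTwin:
    """An AI persona licensed from a human expert."""
    expert_name: str
    persona_prompt: str  # the style/knowledge material the bot is built on

# Users with an active paid subscription (a database lookup in practice).
SUBSCRIBERS = {"user-123"}

def ask(twin: ExpertTwin, user_id: str, question: str) -> str:
    # The paywall: only paying subscribers reach the bot at all.
    if user_id not in SUBSCRIBERS:
        return "Subscribe to chat with this expert's AI twin."
    # A real system would call an LLM conditioned on the persona prompt;
    # a canned reply keeps this sketch self-contained and runnable.
    return f"[{twin.expert_name}'s AI twin] Thanks for asking about: {question!r}"

twin = ExpertTwin("Dr. Example", "Warm, evidence-minded dietitian.")
print(ask(twin, "user-123", "Is intermittent fasting safe for me?"))
print(ask(twin, "anon-999", "Same question"))
```

The design point the sketch makes is the one the article turns on: the gatekeeper here is a payment check, not a clinical one — nothing in the loop assesses whether the question is safe to answer.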
The company frames this as democratising access to expert knowledge. A subscriber in a different time zone, or one who simply cannot afford hourly professional rates, could theoretically get personalised-feeling guidance from a bot modelled on someone they already trust.
The line between democratising expertise and monetising the appearance of it is thin — and Onix sits directly on top of it.
What the platform also enables, according to Wired, is product promotion. The bots can recommend the expert's own merchandise, supplements, or courses — embedding a commercial layer inside what a user might experience as a neutral health consultation.
The Accountability Gap in AI Health Advice
The health and wellness vertical is not an arbitrary choice — it is one of the highest-engagement categories in the creator economy. But it also carries significant risks. Medical and therapeutic advice, even when hedged, can cause real harm if it is wrong, outdated, or tailored by an AI that cannot adequately assess an individual's full circumstances.
Regulatory frameworks for this kind of service remain underdeveloped. In most jurisdictions, a licensed therapist or physician faces strict rules about what they can advise and how. Whether those rules extend to an AI trained on their likeness and operating under their brand name is largely untested legal territory. Onix's model could outpace regulators' ability to respond.
There is also the question of informed consent on the user side. Subscribers interacting with a bot that sounds, writes, and reasons like a trusted expert may not always maintain a clear mental distinction between the human and the AI — particularly in emotionally sensitive areas like mental health support.
Creator Economics Meets Conversational AI
Substack succeeded by giving writers a direct financial relationship with their audience, cutting out traditional media intermediaries. Onix is applying the same logic to knowledge workers in health and wellness, arguing that the intermediary being cut is the calendar — the bottleneck of booking, availability, and geography.
For creators, the proposition is financially attractive. A single human expert can only hold so many sessions per day. An AI twin has no such ceiling. If even a fraction of a large following converts to paid subscribers, the revenue potential scales in a way that one-to-one professional practice simply cannot.
The risk for creators, however, is reputational. If an AI operating under their name gives advice that harms someone, the backlash attaches to the human brand — not to a neutral technology company. How Onix structures liability and disclosure will matter enormously, both to creators considering the platform and to the regulators who will eventually scrutinise it.
What the Market Tells Us
Onix is not alone in spotting this opportunity. Platforms like Character.AI have demonstrated that users will engage deeply — sometimes disturbingly so — with AI personas. Meta has experimented with celebrity AI personas across its apps. What distinguishes Onix's approach is the explicit monetisation of professional credibility in domains where bad advice carries measurable risk.
No funding figures for Onix were available at the time of publication, which means the company's runway and the seriousness of its institutional backing remain unclear. That opacity is itself a data point: the platform is early-stage, and the gap between a compelling pitch and a regulated, liability-tested product is significant.
The broader creator AI economy is moving quickly regardless. Tools that let influencers clone their voice, writing style, or on-screen presence are proliferating. Onix's contribution is to add a transactional subscription layer and aim it specifically at health — the sector where trust is most valuable and most easily exploited.
What This Means
For consumers, Onix represents a new category of risk: paying for the comfort of expert familiarity while receiving advice from a system that carries none of the professional accountability that legally binds the real expert. For regulators and platform policymakers, it is an early signal that AI monetisation is moving into sensitive health territory faster than oversight frameworks are equipped to follow.
