I can leap over to any giant blank brain OR I can start looking for AI providers who will do more for me: providers who will turn those endless text threads into actionable projects and enterprises.
So, unless an AI provider does more for me than act as a huge sycophantic vat of intellect masquerading as a loyal friend, with average-to-terrible filing, admin and management skills, I’ll export my entire account and upload it elsewhere. Somewhere that will add more value and help me succeed.
With that in mind, it is inevitable that entities like OpenAI will have to become more of a business assistant and economic-value generator for their users, lest those users run to Claude or Copilot, who may offer a smoother path to rationalising all our interactions and reaping economic benefits.
Thus all the AI giants (who have had sovereign-nation-GDP levels of capital invested in them) are faced with the same dilemma: customers saying “Help me thrive financially or I’m leaving.”
This could unfold in two ways:
1. AI giants stay as giant blank brains and rely on an ecosystem of developers and independent partner businesses to do all the downstream revenue-generation work and customer service, building apps and services that they can onsell to their own customers (think Google Partners who help businesses manage their Google Ads).
The only problem with this, from the perspective of the AI company and its investors, is that it creates a large middle-man sector that they don’t have a stake in. That is great for the overall economy and for revenue distribution and flow, but it is also a huge chunk of money the AI investors could be eyeing off – especially if they start to get desperate.
So, they will likely fall back on the same old options most tech giants resort to: ramping up subscription costs, ramping up usage costs for developers, bolting on more and more subscription services and selling customer eyeballs to advertisers. Then making their systems more and more complex for developers, to force developer lock-in (“you know so much about our systems – congratulations, you are a great partner who will make lots of money if you stick with us and send us all your customers”).
This could end up being bad for everybody. Enshittification at work.
2. The AI company could loosen the thumbscrews by actually making things easier for developers and end users alike. There will always be a need for developers, because a lot of people simply won’t want to bother interfacing with these monolithic platforms, but there will also be a growing call for end users to feel more empowered. Developers and end users might agree to that in exchange for the AI company having more skin in the game of the downstream businesses. Aim for customer and developer retention not through draconian efforts to confuse, obfuscate and lock everyone in, but by simply being an awesome service that helps everyone thrive.
But what kind of revenue relationship would actually be acceptable, productive and fair? How would we feel about, say, OpenAI setting up a business model where they make our lives easier by facilitating our goals in exchange for a cut of our livelihood? What would they need to offer us?
They could help with “agents” that facilitate entity formation, domains, hosting, websites, operational systems, compliance, logistics, marcomms, sales infrastructure, or supply chain coordination, depending on the business model.
This could resemble a franchise or venture-studio model, where the AI provider supplies a complete operating environment and the user supplies direction, expertise, creativity, market insight and boots-on-the-ground human interaction. The value exchange becomes tangible and ongoing. The giant blank brain becomes a distributed hyper-local, regional, national and/or international operating system, tailored to our individual objectives.
For such a model to be sustainable, it would need to be demonstrably fair. Users would need to succeed financially, with transparent terms, voluntary participation and revenue that is meaningfully shared. Trust would be built through outcomes and reliability. Contractual lock-in would be far more consensual than coercive.
If AI platforms want long-term loyalty, they will need to show how they help people build durable value in the real economy, while ensuring that prosperity flows in both directions.
If they don’t get this right (both the model itself and the optics of it all), the whole arrangement could feel a little too predatory and rent-seeking. The age-old problem of shareholder value being the only metric worth thinking about. They’d better get their own AI to work that one out – and fast.
Otherwise, I’m taking my electric-meat-brain elsewhere. 🙂
[100% human-penned post above]