What "Knowledge as a Service" actually meant: and why DeepSeek made it cheap

An idea sketched two years ago assumed expertise-as-licensable-artifact would be a premium-tier product. The economic floor just dropped through that assumption. Worth re-examining what the idea was actually about now that the substrate has changed.

A thought experiment from April 2023 tried to make the case that personal expertise could become a licensable, portable artifact, something a person could capture, ship, and earn from rather than something an employer extracted as a byproduct of employment. It was speculative when it was written. Most of the technical and economic preconditions weren't yet in place.

The technical preconditions are largely here in early 2025. The economic ones moved faster than the sketch anticipated, and they moved in a direction that changes the shape of the idea. Worth being honest about which parts of the original framing held, which parts didn't, and what the new foundation actually makes possible.

What the original sketch actually claimed

Stripped of the speculative scaffolding, the original idea had three parts:

  1. Expertise has a structural form (a way of reasoning, a set of priors, a vocabulary, an instinct for what's important) that's distinct from the facts the expert can recite. The valuable part is the structural form, not the facts.
  2. Conversational AI was the first interface that could meaningfully serve that structural form to other people. Not as text on a page, not as a video course, but as a thing the recipient could query, push back on, and adapt to their own situation in real time.
  3. If the expertise can be captured and served, it can be licensed, and the resulting market would look more like the market for software libraries than the market for consulting hours.

The first claim is, in retrospect, the part that's held up cleanest. The whole adapter / fine-tuning literature of 2024–25 has effectively been the field discovering that "structural form versus facts" is a useful operational distinction. RAG handles the facts. Adapters handle the priors. Tool-use handles the instincts. The split that the original sketch proposed has turned out to map onto how the technology actually got built.
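
To make that three-way split concrete, here is a deliberately toy sketch in Python. Everything in it is a stand-in: the retriever, the adapter-shaped generator, and the tools are hypothetical lambdas rather than any particular library. The point is only the composition, facts from retrieval, priors from the adapted model, instincts from the tools the expert would reach for.

```python
# Illustrative only: the facts / priors / instincts split, with stand-in components.
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class ExpertService:
    # Facts: a retrieval layer over the expert's documents (the RAG piece).
    retrieve: Callable[[str], list[str]]
    # Priors: a base model plus the expert's adapter weights (the fine-tuning piece).
    generate: Callable[[str], str]
    # Instincts: the tools this expert would reach for on a given kind of question.
    tools: dict[str, Callable[[str], str]] = field(default_factory=dict)

    def answer(self, question: str) -> str:
        facts = self.retrieve(question)                           # what the expert knows
        tool_notes = [run(question) for run in self.tools.values()]  # what they'd check
        prompt = "\n".join(facts + tool_notes + [question])
        return self.generate(prompt)                              # how the expert reasons

# Stand-in components so the sketch runs end to end.
service = ExpertService(
    retrieve=lambda q: [f"[retrieved context for: {q}]"],
    generate=lambda p: f"[adapter-shaped answer to]\n{p}",
    tools={"pricing_check": lambda q: "[tool output]"},
)
print(service.answer("How should I price this engagement?"))
```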

The second claim was correct in direction and wrong in cost. Conversational AI did become an interface that could serve structural expertise. The interesting part was which conversational AI could do it. In April 2023 the implicit answer was "OpenAI's, because that's the only one that works." That answer remained roughly true through late 2024.

The third claim (the licensing market) is the part where the sketch and reality have drifted apart in the most interesting way.

What DeepSeek's pricing actually changes

The original sketch's economic argument leaned on an assumption that personal-AI inference would be expensive enough to justify subscription pricing. Five to twenty dollars per consumer per month, paid to whoever owned the model, a chunk of which would funnel back to the expert whose adapter was being served. That math required inference to remain a premium-tier service.

DeepSeek-R1 is priced at $0.55 per million input tokens and $2.19 per million output. For most realistic personal-AI workloads (call it a few dozen chat turns a day with reasonable answer lengths) that puts the inference cost for a single person at roughly a dollar a month, often less. The subscription tier the original sketch assumed has effectively been undercut by the cost floor that just landed.
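
A back-of-envelope version of that arithmetic, with the assumptions made explicit. The list prices are DeepSeek-R1's published rates; the turn counts and token lengths are illustrative guesses, not measurements.

```python
# Back-of-envelope only; usage assumptions below are illustrative.
INPUT_PER_M = 0.55    # USD per million input tokens (DeepSeek-R1 list price)
OUTPUT_PER_M = 2.19   # USD per million output tokens

turns_per_day = 30             # assumed: a few dozen chat turns
input_tokens_per_turn = 300    # assumed: prompt plus a little context
output_tokens_per_turn = 300   # assumed: a reasonable answer length
days = 30

monthly_input = turns_per_day * input_tokens_per_turn * days / 1e6    # millions of tokens
monthly_output = turns_per_day * output_tokens_per_turn * days / 1e6

cost = monthly_input * INPUT_PER_M + monthly_output * OUTPUT_PER_M
print(f"{cost:.2f} USD per month")   # ~0.74 USD with these assumptions
```

Double or triple those assumptions and the figure grows, but it stays well below the five-to-twenty-dollar subscription tier the original sketch was counting on.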

This sounds like bad news for the licensing thesis. It is not. It changes the shape of the licensing thesis.

When inference is cheap enough that the marginal user costs less than a song on the radio, the value capture has to move somewhere other than the inference layer. What's interesting is that "somewhere other than the inference layer" is exactly where the original sketch was claiming the value would actually live: in the captured expertise itself, the adapter or the persona-defining artifact, not in the compute that serves it.

The economic model that fits the foundation now isn't "subscription to access this expert's AI"; that's still a closed-platform retread. It's closer to: a one-time license, or a per-output royalty, or a usage-tied fractional cut, applied to an artifact that the licensee runs on their own infrastructure at inference costs that approach zero. That shape has more in common with how stock photography or software libraries are licensed than how SaaS is sold.
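
A toy sketch of those three license shapes, just to make the contrast with subscription pricing concrete. Every number in it is invented for illustration; nothing here comes from an actual marketplace.

```python
# Illustrative only: the three license shapes named above, with made-up numbers.

def one_time_license(price: float) -> float:
    """Licensee pays once, then runs the adapter on their own infrastructure."""
    return price

def per_output_royalty(outputs: int, royalty: float) -> float:
    """A fixed fee per generated deliverable (report, review, draft)."""
    return outputs * royalty

def usage_tied_cut(licensee_revenue: float, fraction: float) -> float:
    """A fractional cut of whatever the licensee earns using the artifact."""
    return licensee_revenue * fraction

# A hypothetical month for one licensee of a single expert's adapter.
print(one_time_license(500.0))          # 500.00, paid once
print(per_output_royalty(200, 0.40))    #  80.00 for 200 generated outputs
print(usage_tied_cut(4000.0, 0.03))     # 120.00 on 4,000 of downstream revenue
```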

The marketplace gap is now the interesting problem

The 2023 follow-on piece, "Could there be a marketplace for AI training data", made the explicit prediction that a marketplace would be needed and would emerge. The marketplace hasn't emerged. Hugging Face is a registry, not a marketplace: there are no royalty splits, no license-tier mechanics, no per-use accounting that flows back to the original contributor. The closest commercial efforts in the space are content-provider deals between large publishers and large labs, which are the opposite of the distributed-creator model the original sketch had in mind.

There are real reasons the marketplace hasn't materialized. The closed-frontier platforms have no commercial incentive to build a third-party adapter market on top of their own surfaces; it would commoditize their own value-add. The open-source side has the technical pieces but not the metadata standards, the legal frameworks, or the payment plumbing to make royalties flow. The legal arguments about training-data ownership are still about base-model training, not adaptation rights, and they're moving slowly through the courts.

What's changed since R1 is that the technical basis for an adapter marketplace is now fully in place. You can take an open-weights base model under a permissive license, fine-tune an adapter for a specific reasoning style, and ship the adapter as a standalone file that runs on commodity inference. The piece you'd need to add is the commercial scaffolding: license terms, attribution mechanics, royalty plumbing. None of that requires a research breakthrough. It requires somebody deciding to build it and somebody else deciding to use it.
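
For concreteness, this is roughly what that path looks like today with the Hugging Face transformers and peft libraries. The base-model and adapter names below are placeholders, not real artifacts; the point is that the adapter is a small standalone directory applied on top of a base model the licensee already runs.

```python
# A minimal sketch, assuming a permissively licensed open-weights base model
# and a LoRA-style adapter saved by peft. Names are hypothetical placeholders.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

BASE = "some-org/open-weights-base"       # hypothetical base model
ADAPTER = "./expert-reasoning-adapter"    # hypothetical adapter directory (the licensed artifact)

tokenizer = AutoTokenizer.from_pretrained(BASE)
base_model = AutoModelForCausalLM.from_pretrained(BASE)

# Apply the expert's adapter weights on top of the licensee's own base model.
model = PeftModel.from_pretrained(base_model, ADAPTER)

prompt = "How would you structure diligence on a seed-stage hardware company?"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

The adapter directory is the whole licensable artifact; the base model, the tokenizer, and the inference hardware all belong to the licensee.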

What the framing got wrong about timing

The original piece made the implicit assumption that the technical and commercial timelines would track each other, that as the capability landed, the market structure would form around it. That assumption was wrong by at least a couple of years and possibly more.

What's actually happened is that capability has run ahead of structure by a wider gap than seemed plausible in 2023. The technical pieces for personal-AI ownership exist. The economic logic for it is now more compelling than it was, not less. The legal frameworks haven't moved. The marketplaces haven't formed. The cultural assumption that "your AI assistant" is a feature of someone else's platform rather than an artifact you own remains the default in essentially every consumer-facing implementation.

That's the gap worth tracking. Not whether the technology exists (it does) but how long the structural lag persists between capability and the institutions that would let it become a market. The 2023 sketch assumed two to three years for that lag to close. It's looking like the answer is at least five.

The economic floor moving down doesn't change the direction of the underlying claim. It changes the intensity of the claim: when serving expertise costs essentially nothing, the only reason it stays trapped inside a platform's walled garden is that nobody has built the alternative yet. That's a different kind of problem than the one the original sketch was wrestling with. It's a problem of organization, not of capability.