Bill Gross thinks AI companies are running out of ways to avoid paying creators


Bill Gross has a long history of betting on technological shifts and watching those bets pay off. But the latest proposition from one of Silicon Valley’s most storied founders and investors depends on forces far beyond the Bay Area.

With ProRata, Gross is betting he can build a market in which publishers and creators can see how their work informs AI-generated outputs and get paid accordingly.

He doesn’t expect AI companies to participate out of goodwill. Instead, he believes outside pressures will eventually leave AI operators with little choice. In the meantime, Gross has launched a spinoff, Gist, which allows ProRata partners to generate additional revenue from ProRata’s indexing of their work.

In a conversation with Fast Company, Gross discusses how he thinks that shift could happen, and why he believes some of the biggest names in AI are losing the plot. The interview has been edited for length and clarity.

Where did the inspiration for ProRata come from?

The inspiration came when The New York Times sued OpenAI a few years ago. I thought, wow, I really do think that the AI companies are stealing stuff from everybody.

I think that lawsuits are one way to solve it, but I think a better way to solve it would be a business model that’s fair to everybody. And I thought, just like Spotify shares revenue with artists, just like YouTube shares revenue with artists, why don’t the AI companies share revenue with artists?

If I can solve the problem of unscrambling the egg, figuring out where the answer came from, then I could use that as the attribution breakdown for sharing 50% of the revenues, just like Spotify shares revenues with the artists. So, I worked for a few months on coming up with a method to do that, and I was successful.

So, then I patented that, and then I said, now, let me go see if I can get publishers to join. So we have now signed 1,500 publications in the last two years.

Now I need to convince the big AI companies to share their revenues 50/50 and use my attribution method, and that is probably going to take two things to happen. One, they need to lose their lawsuits. Two, they need to get profitable, so they actually have revenues to share.

To be clear: No AI operators are actually paying money through ProRata?

Not yet. It’s a long game.

There are a few reasons why they’ll have to. One, I think they’re going to lose their lawsuits. Two, even if they don’t lose their lawsuits, it’s the right thing to do. And three, if they don’t have current information, their answer quality will go down.

If one company does it—I think Microsoft is leaning in to be the first company to do something like this—that will put pressure on all of them to do it. So I think we need one domino to fall.

The court cases have been interesting because so far there isn’t a whole lot of consistency. In the Anthropic case, the judge said it’s okay for an AI to get smarter by reading published work.

The New York Times was able to show that there were large excerpts of their work literally in the answer.

I think the right way to pay is based on output. That you can crawl stuff to train your model, but if you use the content in the output, that’s different.

Some ProRata clients like the Atlantic already have deals with OpenAI. What do you add?

Those deals were all input for crawling content to train the model. Our proposal is that you should also get paid on the output. I urge all of our publications to get money both ways. [Editor’s Note: The Atlantic’s arrangement with OpenAI does cover output.]

The site says you serve publishers and creators; how small can the publisher and creator get?

We’ll go to anybody. We have some people who are just vloggers. When this model is correctly in place, I think smaller publishers are helped more in an AI era.

Because you win in the AI answer based on the quality or uniqueness of your content, not by the size of your brand. It’s actually a fair method of giving the long tail proper compensation.

About your attribution: Have you gotten pushback from AI operators that you’re seeing something that isn’t there?

We haven’t got any pushback on that. What we do have is the support from the publishers that the attribution is close enough to being correct that they would sign off on it.

In other words, think about this like a Nielsen rating. Yes, it isn’t perfect. But all the advertisers accept it and they accept the fact that it’s a statistical sample.

Let’s now pivot to your spin-off out of ProRata, Gist.

While the lawsuits are progressing, and while we’re waiting for the AI companies to get profitable, we want to help publishers be successful in any way we can. So, since we have already crawled their content for the purpose of attribution, we understand a lot about what readers are reading so we can make a number of systems for them to help them monetize their content better.

People are trained to ask questions now, so if you show related questions on a website, click-through rate is very, very high. Like, between 3% and 5%, and therefore we can monetize those very well.

Gist’s site-specific search is interesting to me because I keep finding that some of the worst performing search UXs around are on news sites where I’ve worked—where I have to switch to Google to find what I know I wrote.

The site searches are notoriously bad. I think most publishers just haven’t invested in it to make it good. And, of course, Google invests billions of dollars to make it good. Our AI site search is better than Google because we haven’t just done keyword search. We’re actually understanding the context and the knowledge graph of each article.

Basically, here’s the thing that large language models do so great: LLMs actually have a fake, but deeper understanding of the story. They don’t actually understand the story, but they have a fake understanding of it through statistics of words. Because they have such a fake deep understanding of the story, they can predict what questions you might have in your mind next.

When you show that people click on it, that gives you another chance at another page view of that visitor way better than if they just left and went to Google and typed in that question. So, giving a website the power to hold on to the user a little bit longer is something we feel very proud of, and is really working.

In what ways does ProRata use AI itself?

We use it for everything, even the GEO [generative engine optimization] product which we just launched. It was about 13 people for four months; it would have been 30 people for four years if we were doing it the old-fashioned way.

Everybody’s using Claude Code, everybody’s using agentic tools.

Well, we had a patent come back recently; they rejected maybe 13 out of 15 of the claims. It’s very, very hard to decipher the patent office’s rebuttal, as well as which patents they’re referring to that they think have prior art.

I took the patent officer’s rejection, gave it to ChatGPT and said help me come up with a better explanation of why I think these claims are really valid. I gave that back to the patent office a month ago. On Friday, I got the notice back from the patent office and all the claims were approved.

They only got approved on their merit, but it would have been very, very hard for me to dig through every one of the other patents they refer to and explain why my thing was different in a convincing way.

From the word “fake” you used, it sounds like you are not a believer in artificial general intelligence. Are we in an AI bubble?

I definitely think that AGI is possible, probably in a longer time frame than people believe, but I still think we have something which is incredibly useful.

It’s not AGI, but it’s wildly powerful, because, though it doesn’t have the depth of human thinking, it has a huge breadth. It has read so many things that it has what I keep calling a fake understanding of things—but a useful fake understanding of things.

I think that valuations are not that high relative to the revenues because the revenue growth is incredible, but I feel the valuations are very high relative to the current profits because they’re losing money.

However, the cost per token is going down every day. The value per token is still going up every day. As soon as people start paying the fair price for the value they’re getting—for example, to do that patent, which was very valuable to me, I paid $20 a month. That’s too little. I would pay $100 a month.

I don’t think we’re in a bubble in that sense. Compare it to the dot-com bubble, which I lived through: similarly high valuations, but almost no revenues and almost no path to profitability because there were no revenues. So I think that OpenAI does have a path to profitability.

Do you see that as the case for AI companies in general?

Anthropic is going after enterprise customers, who can afford to pay more, and is not doing consumer things like Sora and others, which are expensive and don’t bring in much revenue. And Anthropic is smartly, much to some users’ dismay, throttling people back when they’re using it too much to make sure that they get closer to profitability.

I think Anthropic is going to get profitable sooner than almost all the other companies. I think that OpenAI is finally getting religion about dropping some of the things that are way too expensive and religion in terms of, well, I have 900 million consumers, I better start charging them advertising. Then I think that Meta is on a fool’s errand right now.

I don’t know if you read the recent thing this weekend about every single thing that Mark Zuckerberg has done since Facebook has failed or been bought.

He bought Instagram; great success. He went all in on the metaverse; I think he’s lost like $80 billion. I think he will lose a lot of money on AI too, because he’s not in first place, and he’s not even in third place. The Chinese are doing way better than he is. But it doesn’t matter. He’s got a money mint from Facebook and Instagram to cover it.

Since we mentioned one very rich guy with a self-created public-image problem, we have to talk about xAI.

Yeah, well, I think that the general direction of Elon’s bets are good.

I just think Elon has a habit of exaggerating the timeline of things like data centers in space, reaching Mars, full self-driving, all that. He exaggerates those by about 10 years so that he can raise the money to work on them, and that is a tactic. But he’s promising things that he can’t deliver in a time frame that makes sense.

I have to close with this: What is your 10-year-out forecast for AI? Does it upend society?

I do think that AI is going to upend society, and I would like to do anything I can to try and have the upending be more uniform and more lifting everybody. I’m worried that it will not be. I’m worried that the rich get richer, and it doesn’t flow down to other people.
