Photo of James Hewes, president and CEO of FIPP

Q&A with James Hewes

President & CEO, FIPP

Hewes, leader of the global media organization, talks about the newly released Global Principles for Artificial Intelligence, publishers’ right to compensation for AI “training,” the need for transparency from AI platforms, and the existential threat of unbridled AI.

FIPP co-produces the annual FIPP World Congress and produces a wide range of in-person and virtual events, as well as relevant content covering topics such as digital subscriptions, AI, e-commerce, and live events. FIPP also carries out advocacy activities around copyright, piracy, sustainability, and diversity, equity, and inclusion (DE&I).

CLI: Last month, FIPP joined with 25 other news, magazine, book, and scientific and academic publishing organizations to endorse the Global Principles for Artificial Intelligence (AI). “The collaboration addresses critical dimensions relating to intellectual property, transparency, accountability, quality and integrity, fairness, safety, design, and sustainable development,” according to your joint news release. What is the main takeaway?

Hewes: We want to be clear that the “training data” these Large Language Models (LLMs) and generative AI models use is our content. They’ve assumed that they’re allowed to use our content, and they’ve taken it for free. So, the most important takeaway for them is, “You’re using our content. We want to see what you’ve used, and you have to pay to use it.”

The Global Principles also provide guidance in what will inevitably become a series of negotiations with AI companies over the future development of these tool sets. Hopefully, these principles will give publishers, especially small-to-mid-sized publishers, a bit of a cheat sheet they can use to frame those negotiations.

We don’t want to stifle the development of tools to take away repetitive, tedious work. The danger is that we inadvertently agree to allow AI to put all of us out of business.

CLI: A lot of AI training has already been done with publishers’ content across the web. Isn’t the horse out of the barn?

Hewes: Developers that have built AI tool sets trained on content they did not have the rights to use are on dangerous ground. It may be hard to reverse this trend, but just because something’s hard to do doesn’t mean you don’t have to do it. If you build a building illegally, they make you knock it down.

When it comes to policy decisions and negotiations around IP rights, I think we’ll end up with a patchwork of activities, some of which will be done by the media companies, some by trade associations, and some by existing or new collective licensing organizations.

Generative AI developers have two options. Paying publishers for the right to use their content is one option, and that’s where content licensing could play a role. The second option is to remove copyrighted content from the training datasets. As I understand it as a non-technical person, this can be done during retraining as the application goes from one iteration to another.

CLI: Do you think generative AI is a threat?

Hewes: It’s good and bad. While we publishers might be accused of being a barrier to innovation and progress, there is a real danger that AI could become an existential threat if we get it wrong.

Democracy relies on journalism that can be created in a financially sustainable fashion and distributed to as many people as possible. The problem is that AI treats journalism as if it were water, and as if all water were created equal. Of course, it’s not all created equal. There’s drinking water, and there’s water you shouldn’t drink. It’s the same with content: there’s a ton of content that is harmful, whether created deliberately for malicious purposes or inadvertently.

The Global Principles basically say to AI developers, you need to be able to demonstrate how these models work so that people can understand the processes that generated an answer. And you need to be liable and responsible for the answers and the output AI is producing.

There is a phrase in the Principles that is unprecedented for our industry. It says, “AI systems bear the promise to benefit all humans, including future generations, but only to the extent they are aligned to human values and operate in accordance with global laws.” 

This is a technology that could potentially impact our whole species, and you cannot say for something of such import, “We’re not going to show you how it works because that infringes on our ability to make money from it.” My personal view would be that we are at a point where the rules of capitalism don’t necessarily apply. 

There’s going to have to be a much greater degree of transparency than has been the case with some other innovative technologies in recent years.