The UK Publishers Association recently called for new laws to curb the perceived threats posed by generative AI models like ChatGPT. Whilst concerns that these new technologies could enable more plagiarism and copyright infringement are understandable, the fears seem overblown at this stage.
Rishi Sunak's government would be wise to avoid rushing to heavily regulate AI text generation. The technology is still in its infancy and will likely continue evolving rapidly and unpredictably in the coming years. Any broad regulations enacted today could easily become obsolete, or even counterproductive, tomorrow as capabilities and use cases shift. It is far wiser to take a measured, iterative approach focused on addressing specific, demonstrable harms if and when they arise.
The publishing industry, after all, has weathered seismic disruptions many times before, from the printing press to paperbacks to ebooks. Though initially resistant to each of these changes, publishers adapted and endured. Generative AI models are unlikely to fundamentally undermine the entire publishing model, at least not any time soon. These tools still have major limitations in accuracy, nuanced creativity and deeper comprehension, and most readers still strongly prefer books written by real human authors, not machines.
Rather than lobbying to constrain AI text generators outright, publishers would be better served by thoughtfully exploring how these technologies could actively assist writers in their creative workflows. The proper role for government in cases like this is encouraging continued innovation and competition in emerging technologies, not preemptively restricting them based on speculative harms alone.
If highly advanced AI text generation does eventually enable substantially more copyright infringement, existing laws can be updated and enforced accordingly at that time. But we must not overlook the many potential positive applications, in education, in accessibility for people with disabilities, in empowering new writers, and beyond, that premature regulation driven by panicked what-ifs would risk stifling.
The threat that generative AI text models currently pose to the publishing industry as a whole has been greatly exaggerated. Rather than immediately calling for sweeping new government interventions, publishers should adapt to new technologies as they always have. We should carefully monitor for specific issues that may require tailored policy responses, while leaving space for innovations and benefits we may struggle even to imagine at present. Knee-jerk reactions now could carry serious unintended consequences down the line.