Content creation in the age of AI: How to maintain trust and integrity

Organizations have a vested interest in using generative AI to rapidly scale content creation and boost their credibility and authority in their industry, but it’s not a set-and-forget endeavor. In the race to seize opportunities for business productivity and publishing success, an equally pressing priority is putting guardrails in place to protect integrity and reputation amid the AI content revolution.

Even if ethics are front and center in your organization, content creation is complex and prone to risks around intellectual property, originality, and the legitimacy of information and claims—and that’s before generative AI is applied. A fundamental question emerges: how can organizations uphold the integrity of published work amid growing threats to brand reputation and the mounting costs of retractions?

In this blog, we explore how organizations can be proactive rather than reactive in the face of a changing content and publishing landscape, protecting commercial interests and fulfilling their duty of care to produce ready-to-publish content that stands up to public scrutiny.

What is the role of AI and human output in modern content creation?

Organizations produce a wealth of content, and whether they’re releasing a report, whitepaper, eBook, blog, or research paper, on owned media or distributed externally, the stakes are high. This is especially true for scientific research publication owing to higher benchmarks for accuracy, transparency, ethicality, and innovation, and its reach across industry magazines and journals, databases, and media outlets.

It’s likely your organization is ramping up content production, previously limited by the finite capacity of human resources, and now delegating aspects to generative AI to maximize output. With AI by their side, organizations can scale content publication at an unprecedented rate, but it remains a risk-and-reward endeavor. Inaccurate. Generic. Biased. These are but a handful of adjectives people have used to describe generative AI’s less desirable output. More than a year on from ChatGPT’s stratospheric rise, we know generative AI carries its own implications and recognize the need for human oversight.

Make no mistake, we are in a transformative period, and how organizations manage their commitment to quality, trust, and integrity will set the tone for future AI engagement.

From the perspective of safeguarding business success and society at large, organizations are contending with three key imperatives:


  1. Defining and adopting responsible generative AI guidelines and use cases to govern AI.
  2. Quality control of both AI and human-generated content to uphold content standards.
  3. Making integrity measures part of the daily workflow to mitigate risk.


Where does integrity fit when producing trustworthy content or research?

Integrity measures to validate output are robustly defined and well-represented in academic settings, but tend to be more ambiguous during the content production and publication that occurs in organizational environments. Even if an organization has committed to a culture of integrity, how does this play out in the writing and publication process and across individual contributors, for example?

Publishing retractions of formal research papers are more common than you may think, with misconduct being the primary cause and involving issues such as duplicate content and plagiarism. In evaluating the protocols in place to ensure only reliable, trustworthy content makes it through your organization, there are several factors to consider, including cross-departmental silos affecting reviews and approval, insufficient supervision and oversight, and various individual or institutional blind spots.

Irrespective of the content format or its placement, most organizations would subscribe to the following three pillars of ethical content production:


  1. Accuracy: Importance of fact-checking and reliable sources.
  2. Transparency: Disclosing biases, sources, and affiliations.
  3. Integrity: Avoiding ethical misconduct such as plagiarism, acknowledging intellectual property, and equipping audiences with the best possible information.


The concept of empowering an audience is a particularly compelling one. Organizations have an opportunity, and indeed a responsibility, to incentivize quality assurance and integrity checks at every turn. Moving professionals away from misconduct and toward integrity is an important part of the equation, a value reinforcement that helps inspire responsible authorship and uphold reputation in pursuit of the business agenda.

This proactive approach helps employees navigate potential risks tied to research and content activities that are less straightforward or prone to shortcuts, including truthful data analysis in report-writing, acknowledging writing credits in multi-author collaborations, and using generative AI responsibly to avoid printing falsehoods, bias, or otherwise subpar material.

On the flip side, we’re seeing the ramifications of failing to prioritize quality and integrity checks in content production and release. Consider the publicity generated in Australia when academics used Google Bard AI as part of a submission to a parliamentary inquiry, which contained fake case studies that made unfounded accusations about major consultancy firms.

Or the backlash for a New York lawyer who cited fake cases generated by ChatGPT in a legal brief filed in the federal court.

Whether such ‘AI hallucinations’ decrease in future iterations of the technology is beside the point; content, and the research or source material that underpins it, must be thoroughly checked to ensure it passes the test of reliability and trustworthiness.

How can organizations uphold content, research, and publishing best practice?

Not unlike academia, which can be beholden to a ‘publish or perish’ mindset, organizations may feel pressure to publish copious amounts of business research or content in the race to innovate. And when tight deadlines are in the mix, it’s not altogether surprising that mistakes or intentional shortcuts occur.

To complicate matters further, the scope of business-related publishing outside of publishing houses and journals—such as self-published reports, whitepapers, and articles across both owned and paid media—tends to evade more formal or rigorous checks. Nonetheless, content breaches spotted by vigilant regulators or competitors are an ever-present organizational risk.

For instance, missing references that imply ownership of another’s idea or content are known to surface in all manner of long-form content. Such missteps can be driven by knowledge gaps, lacking or ambiguous business policy, and a willingness to cut corners based on the perception that there are little to no consequences. Aside from the unprofessionalism it casts upon an organization and the cost of retractions, a history of citation errors can even keep an author from being published.

Or consider an employee's indiscriminate use of ChatGPT or another generative AI tool—perhaps to craft a thought leadership asset—which tends to produce generic content from recycled ideas. Without elevating that AI output with a human understanding of brand identity and objectives, such content does little to advance an organization’s authority in the space and may serve to undermine publishing success.

Key considerations:


  1. Establish clear content guidelines and quality assurance processes, including use of AI.
  2. Encourage peer review amongst relevant stakeholders to verify claims.
  3. Prioritize and reward writing with originality to distinguish from derivative content that may infringe on copyright.
  4. Invest in technology that can help flag potential issues in authorship, citations, and originality.
 

To reinforce the earlier point, a strong commitment to integrity alone, without appropriate checks and balances, can still leave organizations at risk. Without an integrity workflow to guide publishing efforts and flag issues, the door is left open for professional misconduct to spread, with newcomers to the organization or industry most at risk.

iThenticate’s role in safeguarding content and publishing outcomes

Imagine that when your employees approach a research report or an article, for example, they can be supported in conducting text similarity checks, verifying attribution of sources and correct citations, and ensuring originality throughout its production. Where manual checks are prone to inconsistencies, integrity software can make this endeavor reliable and scalable. And when businesses invest in proactive measures as part of their workflow, they foster a culture of integrity that moves them closer to ready-to-publish content that stands up to public scrutiny.

Each year, iThenticate checks 10 million documents, earning its reputation as the most trusted similarity checker by the world’s top researchers and publishers. Powered by an industry-leading database of scholarly content, student papers, and current as well as archived web pages, iThenticate is a valuable tool that diverse organizations can use to check their research writing or corporate content prior to publication and mitigate risk.

With the addition of AI writing detection in 2023, reflecting the evolution in integrity measures, iThenticate lightens the load of checking for AI-generated content, protecting against misuse of AI tools and maintaining accuracy and transparency in high-stakes content publication.

Pairing technology with human oversight can empower people within your organization to be aware of, and accountable to, guidelines around content publication while reinforcing a culture of integrity that prioritizes quality content and minimizes the threat of brand damage and retractions.

Overview: Trustworthy content publication amid the AI revolution

Everywhere you turn, there are discussions about generative AI’s impact, the rules of engagement for human-AI partnerships, and the outcomes that can be achieved. It’s clear that organizations have much to gain from generative AI in boosting productivity and the scalability of content and publishing; however, amplifying brand authenticity and integrity is key, and in this sense, the human factor reigns supreme.

We also know that even without AI involvement, content can be prone to errors and ethical breaches, or simply found to be derivative. Therefore, the quality of human output and judgment is of renewed significance as generative AI takes on more of the content creation load. The task at hand is for organizations to better preempt issues that arise during the production process—whether human, AI, or some combination thereof—and address them prior to publication.

The importance of an integrity workflow to support content guidelines cannot be overstated as organizations expand traditional research and writing practices in pursuit of publication. The ultimate purpose of specialized tools such as text similarity checking and AI detection is to help nurture a culture of integrity within the organization and provide peace of mind that your content assets and publishing record are aligned with the values of trust and innovation.

Discover iThenticate to publish with confidence 
