Navigating Generative AI: A Short Guide to the Emerging Risks

As generative AI technology blooms, is it time to address the pressing concerns of data privacy, cybersecurity, technology resilience and intellectual property rights? Let Waracle be your guide to this mission-critical endeavour.


In an age where artificial intelligence (AI) has taken the world by storm, generative AI systems have emerged as the enfant terrible of the AI family. While these prodigious systems create new opportunities for businesses, they also pose a wide range of business risks that must be addressed. 

Let’s explore the potential minefields of generative AI and how firms can embrace this technology while ensuring they are regulation ready.

Peering Through the Fog of Data Privacy

With companies such as JP Morgan, Accenture, Amazon and Verizon having already banned the use of ChatGPT at work over privacy and security concerns, it’s reasonable to expect that many other companies will follow suit.

But why do these early moves serve as such a clear warning to others?

The lifeblood of generative AI systems is data, and personal data often lurks within the datasets used to train these intelligent machines. 

The lack of transparency around what data is collected, for what purpose and how it is used can expose firms to increased risk when developing or using generative AI APIs. As the landscape evolves, organisations must find ways to navigate the murky waters of data privacy controls and establish the risk acceptances needed to demonstrate that due diligence has taken place.

It’s time for firms to re-evaluate their approach to data retention and access rights when it comes to requests made via generative AI systems. Additionally, as generative AI adoption surges, companies need to pay closer attention to the resilience of their technology and cloud infrastructure. This is particularly crucial for firms with a vast number of employees or clients hopping on the generative AI bandwagon.
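
To make the idea of a data privacy control at this boundary more concrete, here is a minimal sketch in Python, assuming a simple pre-send redaction step: obvious personal data is stripped from a prompt before it is handed to any third-party generative AI API. The patterns and the send_to_generative_ai placeholder are illustrative assumptions only, not a production-grade redaction tool or any particular vendor’s API.

```python
import re

# Very small illustration: patterns for two common kinds of personal data.
# A real control would cover far more (names, addresses, account numbers, etc.)
# and would sit alongside policy, logging and human review.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE = re.compile(r"\+?\d[\d\s()-]{7,}\d")


def redact(prompt: str) -> str:
    """Replace obvious personal data with placeholders before the prompt
    leaves the organisation's boundary."""
    prompt = EMAIL.sub("[EMAIL REDACTED]", prompt)
    prompt = PHONE.sub("[PHONE REDACTED]", prompt)
    return prompt


def send_to_generative_ai(prompt: str) -> str:
    # Hypothetical placeholder for whichever generative AI API a firm uses;
    # the point is that only the redacted prompt should ever be sent.
    raise NotImplementedError


if __name__ == "__main__":
    raw = "Draft a reply to jane.doe@example.com, phone +44 7700 900123."
    print(redact(raw))
    # -> Draft a reply to [EMAIL REDACTED], phone [PHONE REDACTED].
```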

Guarding the Cyber-Frontiers of Generative AI

While generative AI presents new opportunities, it also opens the door to unprecedented cybersecurity risks and adversarial uses. 

For example, biometric security systems such as facial and fingerprint recognition could face a new level of threat, given generative AI’s ability to replicate images in formats that could be used to unlock those systems.

There have already been examples of investigative journalists using exactly this kind of technique to demonstrate how bank account security can be bypassed.

As these AI systems become more advanced, it’s essential for businesses to stay vigilant and proactive in addressing potential cyber-threats as they morph, change and develop.

Who Owns AI-Generated Content?

As generative AI continues to create awe-inspiring content, the legal conundrum of ownership remains a hot topic. 

For example, if you create a publication using ChatGPT as your main tool for drafting and refining the text, does the owner of that system, dataset and tool have any rights to the ‘thing’ that is created?

Since laws and interpretations vary across jurisdictions, firms must carefully consider the intellectual property rights tied to AI-generated content. Engaging legal experts early on can help mitigate potential disputes and keep organisations ahead of the curve. Moreover, it’s vital for firms to identify any current or future litigation risks related to AI technology usage, such as those arising from the proposed EU AI Liability Directive.

In an article published in Forbes, Margaret Esquenet said:

“For a work to enjoy copyright protection under current U.S. law, “the work must be the result of original and creative authorship by a human author”… in the absence of human creative input, a work is not entitled to copyright protection. As a result, the U.S. Copyright Office will not register a work that was created by an autonomous artificial intelligence tool.”

Seems quite definitive… for now.

What are the Next Steps for Generative AI Adoption?

Generative AI is undoubtedly an exciting frontier that offers immense potential for businesses. However, it’s essential for organisations to address the challenges it presents to reap the full benefits. 

By acknowledging the risks, embracing innovation and ensuring regulatory readiness, firms can successfully navigate the complex yet rewarding world of generative AI technology.

Navigating the complex labyrinth of generative AI risks can be daunting, but fear not… Here are some key takeaways for businesses to review as they venture into the realm of generative AI:

  • Pinpoint areas where generative AI could be used internally or by external stakeholders, such as clients, suppliers, or other key players.
  • Assess whether generative AI introduces new risks or regulatory obligations to your organisation.
  • Establish your firm’s risk appetite for generative AI adoption and develop corresponding policies and procedures.
  • Evaluate the completeness and adequacy of existing controls, identifying any requirements for additional policies and procedures.
  • Address any control gaps to ensure that risks associated with generative AI are effectively mitigated.

Waracle is actively exploring how GPT-infused software may push the boundaries of what can be achieved and how it will develop over time. If you are concerned and want to discuss your thoughts on the business risks of generative AI, please get in touch with our team to talk it through.

We can be your guide through this nascent space!

Authors

Blair Walker
Head of Marketing
