
ChatGPT: A Grey Zone Between Privacy, Cybersecurity, Human Rights and Innovation

Author: Tilbe Birengel

Introduction

ChatGPT, a large language model (LLM) developed by OpenAI, is an artificial intelligence (AI) system based on deep learning techniques and neural networks for natural language processing.[1]

ChatGPT can process and generate human-like text, hold conversations, analyse and answer follow-up questions, and acknowledge its errors. It can also review and improve code in programming languages such as Python within seconds. With the release of the more advanced GPT-4 model in March 2023, it has achieved higher performance and broader functionality in many areas, including problem solving and image processing.[2]
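By way of illustration, such a code-review request can be sent programmatically through OpenAI's Python SDK. The sketch below is a minimal, hypothetical example: the model identifier, the prompts and the sample snippet are the author's assumptions for demonstration, not details drawn from this article.

```python
# Minimal sketch: asking the model to improve a Python snippet via the
# OpenAI Python SDK (openai>=1.0). Model name and prompts are illustrative.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

# A deliberately clumsy snippet for the model to improve.
sample_code = """
def average(nums):
    total = 0
    for i in range(len(nums)):
        total = total + nums[i]
    return total / len(nums)
"""

response = client.chat.completions.create(
    model="gpt-4",  # assumed model identifier
    messages=[
        {"role": "system", "content": "You are a senior Python code reviewer."},
        {"role": "user", "content": f"Improve this code and explain why:\n{sample_code}"},
    ],
)

# Print the model's suggested rewrite and explanation.
print(response.choices[0].message.content)
```

Note that, as discussed below, any text submitted in such a request may be retained by the provider, which is precisely where the privacy concerns arise.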

AI models are expected to open up many opportunities by increasing productivity, creating new search engine architectures, and reducing costs in healthcare, finance and public administration.[3] The rapid development in this area by OpenAI and competitors such as Google and Meta is exciting to watch, but it raises major concerns, which are discussed below.

Potential Risks of ChatGPT

Given that LLMs are a form of “generative AI”, these models generate their output from the training data at hand, which may include copyrighted material as well as confidential, biased or discriminatory information.[4] This means that any data fed into the system may become training material for future models.

The massive data collection and processing involved in AI training does not comply with applicable privacy rules such as the GDPR,[5] as it lacks transparency and a legal justification.[6] Furthermore, the chatbot does not provide an immediate option to remove previously stored data. It is also unclear whether the data collected will be shared with OpenAI’s other tools, leaving it open to information hazards.[7]
