ITL #597 Artificial Intelligence: a future we (don’t) believe in?
Clear ethical standards and frameworks are needed to separate truth from fakery. By Nikolina Ljepava.
We live in an era of fascinating change. Technology is advancing faster than ever and becoming an inseparable part of daily life and work, with new and more advanced technological solutions appearing daily. What once belonged to the realm of science fiction is increasingly becoming reality, and technology is playing an ever more significant role in our lives.
Traditionally, innovation has been the driver of progress, and the last 30 years of technological innovation have brought fundamental changes to how we work, communicate, and live. Artificial Intelligence (AI) entered our lives slowly but steadily, often unnoticed by those unfamiliar with how predictive AI models work: it has been personalizing Internet content, delivering targeted advertising, and powering the personal assistants on our phones.
However, everything changed in November 2022, when OpenAI made its generative AI model ChatGPT available for public testing. The user-friendly interface, coupled with natural language processing, enabled much wider adoption of these tools. Unlike previous AI models, which required specialized knowledge, the simplicity and wide availability of ChatGPT allowed every Internet user to interact with an AI tool, ask questions, and receive content and answers generated by artificial intelligence.
And so we entered an era of widespread artificial intelligence applications. In less than six months, artificial intelligence went from a topic reserved for a small number of specialized experts to the center of discussion across disciplines. In the media and on social networks, we increasingly witness debates about the current and long-term impact of artificial intelligence on people and society, often accompanied by speculation about possible utopian and dystopian future scenarios.
In the public interest spotlight
The ease of access and use of generative AI models such as ChatGPT and Google Gemini, together with their wide range of potential applications, placed these technologies overnight in the spotlight of public interest. Generative AI models have radically transformed the paradigm of creating and finding content: material that previously required hours of work, whether textual, visual, or video, can now be created in seconds.
At the same time, an increasing number of users have started replacing traditional Internet searches with questions posed to artificial intelligence. Microsoft quickly recognized this trend and integrated a generative AI model into its Bing search engine, allowing users to ask questions directly and receive instant answers. Following the textual models came generative AI models for creating visual and video content, such as Midjourney, DALL-E, Leonardo, Synthesia, and HeyGen, as well as specialized models for various disciplines. The number of AI startups is growing continuously, and the market for generative artificial intelligence products is expected to grow by about USD 20 billion annually through 2030. Given their ease of use and wide range of applications, these technologies will play an increasingly significant role in the future.
On the other hand, the widespread use of generative AI has raised many questions across numerous professions, especially in communications, where the potential for manipulation is high. Communications professionals face the challenge that artificial intelligence can generate any information and that the validity of information is more threatened than ever.
The era of social media has already brought the problem of spreading fake news and misinformation, which is very difficult to control. The use of artificial intelligence to create content that initially appears completely legitimate has elevated this problem to an even higher level.
Can we trust our own eyes?
With increasingly sophisticated AI tools for creating visual and video content, the question arises whether we can still trust our own eyes, and whether we will in future have to critically re-examine every piece of content we see to determine whether it was created by artificial intelligence. Can we even recognize generated content? According to a recent study conducted in the United Kingdom, more than a third of respondents could not distinguish real content from generated (deepfake) content.
And this is just the tip of the iceberg. How do we regulate the use of these tools in regular communications activities? Is using artificial intelligence for writing media releases and news articles acceptable? With increasing automation and the refinement of generative AI models, is the future of online communication one where we will increasingly read news, social media posts, and statements entirely generated by AI tools? Should content created by artificial intelligence be specially marked? How do we regulate communication with personalized AI assistants that are gaining broader business applications? What level of personalization is acceptable, and how are our data used by predictive AI models? Last but not least, will artificial intelligence replace journalists, administrative workers, and communications and public relations professionals in the future?
It has been over a year since we started using ChatGPT and similar tools, and we still do not have clear answers to most of these questions. In communications and public relations, the key role of professional associations and organizations is to establish ethical guidelines for the use of artificial intelligence in these professions. Trust is the foundation of all public relations and communications, and in an era of post-truth and generated information, the question arises of whom we can trust today. In the age of rapid information exchange and digital media, combined with the unethical use of artificial intelligence, the line between fact and fiction can easily blur, and the possibilities for abuse are unlimited.
IPRA at the forefront of ethics
The International Public Relations Association (IPRA), led by its president Natasa Pavlović Bujas from Serbia, is one of the first professional associations to recognize the need for Ethical Guidelines for the Use of Artificial Intelligence (AI) in Communications, and it has formed an international working group of communications professionals to develop them. At present, there are too many open questions and ambiguities related to the use of artificial intelligence, especially the generative models used for content creation.
Setting clear standards of behavior and work for professionals in public relations and communications, and defining acceptable and unacceptable uses of artificial intelligence, will be crucial in the future. And that is only the first step. Once established, the standards must be continually revised in line with the development of new technologies, accompanied by timely education and implementation of those standards in practice.
The potential for using artificial intelligence to improve processes and increase productivity is significant, but with it comes great responsibility, especially in communications. The transparent and responsible use of AI tools, balancing technological advancement with the preservation of professional values and standards, is a challenge both for us and for future generations. At this moment, we are responsible for establishing, and genuinely applying, guidelines that will enable the ethical and responsible use of these technologies. If we do not define and implement clear frameworks and ethical norms, I fear we will face a deceptive future in which we can no longer distinguish lies from truth.
- IPRA has recently launched an AI Chapter.
The Author
Dr. Nikolina Ljepava
Dr. Nikolina Ljepava, Acting Dean, College of Business Administration, The American University in the Emirates (AUE).
Forward, Post, Comment | #IpraITL
We are keen for our IPRA Thought Leadership essays to stimulate debate. With that objective in mind, we encourage readers to participate in and facilitate discussion. Please forward essay links to your industry contacts, post them to blogs, websites and social networking sites and above all give us your feedback via forums such as IPRA’s LinkedIn group. A new ITL essay is published on the IPRA website every week. Prospective ITL essay contributors should send a short synopsis to IPRA head of editorial content Rob Gray.