A lot has been said about artificial intelligence (AI). It is reshaping global business and everyday life worldwide, and it is nothing short of a new industrial revolution. Mankind's striving for an easier life has led to the creation of a fast-learning, self-aware creature, one that will irreversibly become an unexpected (or perhaps unwanted) member of society. We have witnessed the universality of AI's applications. Here, we focus on the use of AI in news media and its potential to shape public opinion.
Media outlets widely employ AI to gather news information. It can also filter out false data and fake news, contributing to the accuracy and truthfulness of published information. In this way, AI supports one of the main principles of journalism: giving the public comprehensive information on events or persons, based on credible sources. AI also tracks audience preferences, boosting the income of media outlets.
It is obvious that AI is penetrating ever deeper into the media. But one must not forget the core of true journalism: making professional, fairly balanced editorial decisions about what the public wants to hear or see and what it needs to hear or see. By the power of their pens, journalists steer public opinion. As professionals, they must diligently monitor public developments, process information, and judge what the public needs to know so that it can actively participate in shaping society and holding politicians to account. Will we let a robot shape our opinions on subjects of public interest?
This question is not only ethical but also social. News media are a specific business: a mix of informing the public on matters of public interest and providing commercial media services. The business side must not jeopardize the non-business one, yet the fine line between the two is often barely visible. Media owners are oriented predominantly toward the business side. As AI's contribution to the media business grows, will AI also decide what the public needs to hear?
Does a robot have enough ethics to assess what the public needs in order to make the right social and political decisions? Perhaps we should not cross this line, at least until AI learns to become an artificial homo politicus. Until then, there should be a discussion about the extent to which AI can be used in creating and implementing an editorial line.
Although editorial lines are internal matters of publishing companies, there should be a balanced regulatory response to the use of AI in news journalism. Such regulation would not threaten journalism and its freedoms; rather, it would protect them from the potential business-driven misuse of AI. For now, AI should remain a tool, not a journalist.