ABU members encouraged to embrace artificial intelligence
ABU members have been cautioned that if they don’t use artificial intelligence (AI) by 2030 they may become extinct. The advice came from AI expert Tracy Gilliam to an ABU webinar on “New Developments in AI for Broadcasters”.
Gilliam, who is Chief Strategy Officer for AI company Futuri, said the field was evolving rapidly.
“It is a very exciting time to be in the radio industry,” she said, adding: “if we can use it [AI] to make a positive impact.”
She agreed that many people’s attitudes towards AI had become more positive in just the few months since the issue was discussed at the ABU CON-FEST and RadioSonic conference in July.
Gilliam said some people in radio saw AI as a way of saving costs with automated systems and artificial characters and voices, but this was a narrow viewpoint.
She said investment in AI enabled broadcasters to save money and re-invest it in improving specific areas – most importantly in what humans do best: creative thinking.
“People won’t be replaced by AI,” she said. “But people will be replaced by people who use AI.”
Gilliam took the more than 70 webinar participants through the advantages of AI, tools that are being developed and how organisations can restructure to work collaboratively with other experts in different areas of the new technology.
Throughout the hour, she repeatedly stressed that AI did not replace the best of a radio station’s programming, brand or qualities, but enabled broadcasters to concentrate on what they do well to attract audiences, such as hyper-local news and information. AI could be used to perform the more mundane, repetitive tasks, freeing up human creators.
The concept of creating artificial on-air personalities drew a lot of questions from webinar participants before Gilliam closed her presentation with a summary of the two biggest risks of using artificial intelligence.
The first was the risk of losing audience trust by not being open, honest and transparent about how the station was using AI in important areas that relied on human intervention.
The second risk was in assuming that because AI could generate material in a matter of seconds, it therefore did not need human oversight.
“Humans still need to go through it [AI-generated material] to check it and to make it human,” she said.