Artificial intelligence (AI) has made big waves in the digital asset management (DAM) space. AI now helps DAM users automate routine tasks like tagging, project management and content reuse. This has been a game changer for many organizations, freeing up people's time for higher-value work.
But AI isn't always perfect, and sometimes it can even make some truly bad "decisions." There's a wealth of examples and data out there showing that AI can be biased. In the past few years alone, despite the advancements in AI, we've seen numerous problematic applications of it.
There's also a huge amount of research on how IT organizations can combat bias in AI. While I won't repeat it all here, I encourage you to read what organizations like McKinsey and HBR are publishing on the topic.
But when it comes to DAM and content intelligence, the stakes are also high. What happens to your brand integrity if your DAM’s auto-cropping shows bias in which types of faces it chooses as a point of interest? What if your AI auto-tags images and videos in an inappropriate manner? Or what if your AI impacts your search so that you only get a certain type of image or video returned as a result?
Fortunately, brands can take some simple steps to help combat potential bias in content intelligence.
1. Don’t Get Locked Into a Single AI
Vendors that provide AI services, like Microsoft, Google, Amazon, and IBM, are constantly innovating on AI. Most DAM vendors choose one of these providers to run their AI behind the curtain. Yet even with immense resources, some of these providers accidentally program bias into their AI. A DAM that makes it easy to use multiple AI services, or to swap one service out for another, reduces the risk of inheriting any single vendor's bias.
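As a rough illustration of this idea (the vendor classes, tags and consensus rule below are hypothetical, not any DAM product's actual API), a DAM could hide each AI vendor behind a common tagging interface and keep only the tags that multiple services agree on:

```python
from abc import ABC, abstractmethod
from collections import Counter


class TaggingService(ABC):
    """Common interface so the DAM isn't coupled to a single AI vendor."""

    @abstractmethod
    def tag(self, image_bytes: bytes) -> list[str]:
        ...


class VendorATagger(TaggingService):
    def tag(self, image_bytes: bytes) -> list[str]:
        # In a real system this would call vendor A's vision API;
        # stubbed here for illustration.
        return ["person", "outdoor", "tree"]


class VendorBTagger(TaggingService):
    def tag(self, image_bytes: bytes) -> list[str]:
        # Stubbed response standing in for a second vendor's API.
        return ["person", "outdoor", "sky"]


def consensus_tags(image_bytes: bytes,
                   services: list[TaggingService],
                   min_votes: int = 2) -> list[str]:
    """Keep only tags that at least `min_votes` services agree on,
    so no single vendor's skew decides the final metadata alone."""
    votes: Counter[str] = Counter()
    for svc in services:
        votes.update(set(svc.tag(image_bytes)))
    return [tag for tag, count in votes.items() if count >= min_votes]
```

Because each vendor sits behind the same interface, swapping a provider in or out is a one-line change rather than a re-architecture, which is what makes the "don't get locked in" advice practical.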
2. Take a Training, Not Just a Black Box, Approach
More sophisticated companies are starting to train their DAM AI. The main business driver is that these companies typically classify, organize and reuse content based on business-specific criteria like product categories. But a side benefit of a training-centric approach is that your internal team can verify bias isn't accidentally being built into your DAM's content intelligence.
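One concrete way a team can do that verification is a simple audit of auto-tagging accuracy across groups in a human-reviewed evaluation set. This is a minimal sketch with made-up group names, sample data and a 20% threshold chosen purely for illustration:

```python
from collections import defaultdict


def tag_rate_by_group(records: list[tuple[str, bool]]) -> dict[str, float]:
    """records: (group, was_tagged_correctly) pairs from a human-reviewed
    evaluation set. Returns per-group accuracy so the team can spot
    skew in the model before it ships."""
    totals: dict[str, list[int]] = defaultdict(lambda: [0, 0])  # group -> [correct, total]
    for group, correct in records:
        totals[group][1] += 1
        if correct:
            totals[group][0] += 1
    return {group: correct / total for group, (correct, total) in totals.items()}


# Toy evaluation data: did the auto-tagger label each asset correctly?
audit = tag_rate_by_group([
    ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False),
])

# Flag any group that trails the best-performing group by more than 20 points.
best = max(audit.values())
flagged = [group for group, acc in audit.items() if best - acc > 0.2]
```

A gap like this doesn't prove bias on its own, but it tells the team exactly where to look before retraining, which is the kind of check a black-box service never exposes.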
3. Have the Right Team in Place
Lack of diversity in development teams is a key driver of the bias we see in AI today. Any organization building a DAM AI program should keep this in mind when building out its team: not just the developers and contractors who may train a specialized AI model, but also the librarians and system administrators who ensure high data quality within the DAM system.
What are your thoughts on DAM and AI? I’d love to continue the conversation on LinkedIn or Twitter (@AYakkundi)!
Anjali is a product marketing director at Aprimo, where she looks after the strategy, go-to-market, positioning, and messaging for the Marketing Productivity, Plan and Spend, and Digital Asset Management products. Prior to joining Aprimo, she spent 8 years at Forrester Research covering the marketing technology, ecommerce, and digital agency spaces.