The AI Advertising Podcast: S1
Episode 11
AI and Brand Safety: What Marketers Need to Know Now

About This Episode
As generative AI transforms the way we create and distribute content, it’s also reshaping the brand safety conversation. If you’re a marketer, media buyer, or brand leader trying to balance innovation with integrity, this episode is packed with insights you can use today.
Matt Shapo | Director of Digital Audio and Video, IAB
Connie Yan | Director of Platform Quality, StackAdapt
Transcript
Diego Pineda (00:00:00)
Today, brands are expected to show up everywhere their audience is, but the risk of showing up in the wrong place is higher than ever. From misinformation and synthetic content to cultural misalignment and inappropriate placements, brand safety has never been more complex. At the same time, generative AI and machine learning tools are helping us make sense of content at scale and build smarter filters. But how do we strike the right balance between protection and performance?
Today, we’re diving deep into brand safety in the era of AI with Matt Shapo, Director of Digital Audio and Video at the Interactive Advertising Bureau (IAB), who’s at the forefront of educating and mobilizing the industry around standards and innovation in ad-supported media. And Connie Yan, Director of Platform Quality at StackAdapt, where she leads a global team of brand safety analysts.
Let’s get started.
Podcast Intro (00:00:58)
Welcome to the AI Advertising Podcast, brought to you by StackAdapt. I’m your host, Diego Pineda. Get ready to dive into AI, Ads, and Aha moments.
Diego Pineda (00:01:14)
To start, I asked Connie how she defines brand safety in today’s landscape and how it has evolved in recent years.
Connie Yan (00:01:22)
In today’s ecosystem, brand safety is really about just making sure your ads don’t show up next to content that could harm your reputation. So, whether that’s hate speech, misinformation, violent content, or maybe just stuff that’s off-brand. But brand safety is also becoming more nuanced. So it’s not just about avoiding content. It’s about aligning with the right kind of content and finding that positive alignment for your brand values.
Diego Pineda (00:01:47)
When we talk about brand safety, what we’re seeing is a shift from simple binary decisions to brand suitability, which is a more customized, strategic approach to risk. Suitability acknowledges that not all brands have the same tolerance for risk or the same cultural touchpoints. What might be edgy for one brand might be the perfect environment for another. That means tools and processes need to be flexible.
Connie Yan (00:02:14)
Advertisers can customize these brand safety tools to reflect their brand values and maybe even adjust them for different campaigns. So, for example, a sports brand is probably okay with more edgy content, while a healthcare brand prefers not to touch anything controversial. But I think what’s important is that tolerance levels shouldn’t be assumed. There are common sense things that brands want to stay away from, but risk tolerance can vary, so testing is key. And of course, brand suitability isn’t always about what you want to avoid. It’s about identifying positive alignment with the content, whether that’s the tone or the audience dynamics that really reinforce the brand’s values.
Diego Pineda (00:03:00)
From IAB’s perspective, brand safety is changing in today’s advertising environment. Here’s Matt Shapo.
Matt Shapo (00:03:07)
Like so many things in digital advertising right now, I would describe what has historically been referred to as brand safety and suitability as being in a period of really tremendous flux and transformation. A lot of what I’m seeing personally, and what we’re seeing across the organization generally at IAB, is a very welcome focus on greater transparency across all forms of media. And most importantly, both within our membership and even outside of it, there is a general desire on the part of advertisers to take more of an inclusion-based rather than exclusion-based approach to analyzing media environments. That is probably the biggest single trend or shift that I’m seeing. At IAB, and certainly at other industry organizations and among many stakeholders across the industry, the big conversation for years now has been: how do we stop walling off gigantic pools of potential advertiser inventory with outdated, blunt-instrument keyword block lists that over-block inventory advertisers would actually really like to be a part of? And how do we get to more of a strategic, surgical, scalpel-like approach? Sure, brands are still able to protect themselves from that extremely tiny percentage of media environments where there really might be some risk to their brand reputation. But acknowledging that that tiny, tiny percentage exists, we can start to do a much better job of embracing the large percentages of inventory that are currently avoided because of these overly broad keyword blocking schemes that have dominated the implementation and practice of brand safety and suitability for so long.
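The over-blocking problem Matt describes can be sketched in a few lines. This is a toy illustration, not IAB or any vendor's tooling; the blocklist terms and page titles are invented:

```python
# Toy sketch: a blunt keyword blocklist blocks any page whose title
# contains a listed term, regardless of context.
BLOCKLIST = {"shooting", "attack", "crash"}

def keyword_blocked(title: str) -> bool:
    """Block a page if any blocklist term appears anywhere in its title."""
    words = title.lower().split()
    return any(term in words for term in BLOCKLIST)

titles = [
    "Local team wins after shooting 60% from the field",  # basketball, safe
    "Stock market crash course for new investors",        # finance, safe
    "Breaking: armed attack reported downtown",           # genuinely risky
]

blocked = [t for t in titles if keyword_blocked(t)]
# All three titles are blocked, though only one carries real brand risk:
# two pools of perfectly suitable inventory are walled off.
```

A context-aware "scalpel" approach would instead score the surrounding content, which is where the NLP tooling discussed later comes in.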
Diego Pineda (00:05:12)
One of the most promising areas of innovation for brand safety is in AI. But how exactly are these tools being used to detect risky content and align ads with brand values?
Connie Yan (00:05:23)
We can first start with natural language processing. It’s really great for understanding text-based content. And within natural language processing, large language models are proving to be particularly powerful. Their ability to understand context, nuance, and even sentiment in text at a much deeper level allows for more accurate identification of potentially brand-damaging content.
So this goes beyond simple keyword matching. Then there are other emerging technologies like computer vision, which help with detection in images and videos. And machine learning kind of ties it all together: learning these patterns and getting better over time at spotting risky content for us.
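One way to picture how these pieces combine with the per-brand tolerance levels Connie mentioned earlier is a suitability check: a model scores content per risk category, and each brand supplies its own thresholds. This is a hypothetical sketch; `score_text` is a hard-coded stand-in for a real NLP/LLM classifier, and the brands, categories, and numbers are invented:

```python
from dataclasses import dataclass

RISK_CATEGORIES = ("hate_speech", "misinformation", "violence")

@dataclass
class BrandProfile:
    name: str
    thresholds: dict  # category -> max tolerated risk score, 0.0 to 1.0

def score_text(text: str) -> dict:
    """Stand-in for a contextual model; a real system would call an LLM."""
    text = text.lower()
    return {
        "hate_speech": 0.05,
        "misinformation": 0.8 if "miracle cure" in text else 0.1,
        "violence": 0.7 if "graphic" in text else 0.1,
    }

def suitable(text: str, brand: BrandProfile) -> bool:
    """Content is suitable only if every category is under the brand's limit."""
    scores = score_text(text)
    return all(scores[c] <= brand.thresholds[c] for c in RISK_CATEGORIES)

# An edgier sports brand tolerates more violence than a healthcare brand.
sports_brand = BrandProfile(
    "sports", {"hate_speech": 0.2, "misinformation": 0.3, "violence": 0.75})
health_brand = BrandProfile(
    "health", {"hate_speech": 0.2, "misinformation": 0.1, "violence": 0.2})

headline = "Fighter lands graphic knockout in title bout"
# suitable(headline, sports_brand) -> True; suitable(headline, health_brand) -> False
```

The same content passes for one brand and fails for another, which is the shift from binary safety to suitability in miniature.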
Diego Pineda (00:06:13)
That kind of semantic depth is essential when you’re trying to differentiate between satire and hate speech, or spot emerging misinformation trends. And now with AI, we can spot that type of content faster and at scale. As Director of Digital Audio and Video at IAB, Matt Shapo has seen firsthand the impact of AI on brand safety.
Matt Shapo (00:06:35)
One of the things that I’m very excited about when it comes to AI and brand safety is that in the area of podcasting, for instance, we’re starting to see an incredible amount of deep dive, granular analysis of podcast conversations because we have these transcripts that we can analyze. So AI is being deployed across the board in podcasting right now and increasingly embraced by buyers to really parse podcast transcripts and understand content at a level that allows much greater opening up of advertiser investment than had previously been considered. Because if you can actually use AI to sort of go through hundreds or thousands of shows and tens to hundreds of thousands of individual podcast episodes, then you suddenly have a brand new ballgame, because you’re no longer asking an intern to sit down and listen to every single podcast episode to make sure that it is safe or to make sure that it’s aligned with your particular brand. That’s not possible to do at scale when you’re talking about individual human beings. But when you can bring AI to bear and allow it to do all of the transcription analysis that a person can never do across hundreds, thousands, tens of thousands of shows, it makes it a lot easier to really understand where the opportunities are. And we’re seeing the results.
The returns are tremendous. Again, in the media center where I work at IAB, I’m watching a lot of brand advertisers go from investing in, let’s say, two hundred, three hundred, four hundred podcast shows to investing in thousands of shows. And it’s all brand new inventory that they would have originally thought was a little too risky for them, but now they can actually verify that it’s not, because they’re using AI to go through these transcripts and understand the content.
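The transcript-vetting workflow Matt describes reduces to a simple pipeline: classify each episode transcript, then aggregate to the show level so whole shows become eligible inventory. A minimal sketch, with an invented stand-in classifier and made-up show names:

```python
from collections import defaultdict

def risky(transcript: str) -> bool:
    """Stand-in for an AI transcript classifier; a real one would be an
    NLP model scoring the full transcript, not a substring check."""
    return "graphic violence" in transcript.lower()

# (show name, episode transcript) pairs -- thousands in practice
episodes = [
    ("Hoops Daily", "Tonight we break down the playoff race..."),
    ("Hoops Daily", "Draft rumors and trade deadline talk..."),
    ("True Crime Hour", "A graphic violence warning before this story..."),
]

flagged = defaultdict(int)
total = defaultdict(int)
for show, transcript in episodes:
    total[show] += 1
    flagged[show] += risky(transcript)

# Shows with no flagged episodes become newly eligible inventory.
eligible = [show for show in total if flagged[show] == 0]
```

The point is scale: no intern listens to tens of thousands of episodes, but a loop like this runs over all of them, and a human then reviews the aggregated report.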
Diego Pineda (00:08:21)
Beyond this particular use case, AI is helping interpret meaning, tone, and visual cues, and then feeding that data into programmatic buying decisions in real time.
Diego Pineda (00:08:34)
That brings us to one of the core themes in both conversations: the importance of human review.
Connie Yan (00:08:40)
I think AI is definitely getting better, but it still has caveats. Things like satire are sometimes misinterpreted even by humans, and cultural norms can change really quickly, especially online. So AI still needs that level of human oversight and cultural input. Time relevance is really important for that too. News changes quickly, and what may have been called culturally relevant yesterday could change the next day.
So I think what’s important to note is that we always go back to our roots here at StackAdapt, which is human verification. Eight years ago, we started with a team that essentially used humans to assess all content and determine what looks risky. There are, of course, common sense things that every advertiser wants to block, and over time our team has really developed the expertise to catch this. That’s something we still hold true to today. We use tools to make this process more effective and more efficient, but ultimately, we really rely on humans to make that final decision.
Matt Shapo (00:09:48)
There absolutely is still a role for the human dimension. And I should be careful to add that even when advertisers use AI to help them do things like parse podcast transcripts for the purposes of finding suitable environments, they are nevertheless always keeping a human being in the loop. The validation that occurs is always sent back in a report to the advertisers, and they’re really able to understand, oftentimes through pixel tracking, that their ads really did show up in the episodes they thought would be most suitable for them. So there always is human review. It’s just that by adding the AI component, a human being becomes much more efficient, much more productive, and able to vet and analyze a much wider pool of content than they could have otherwise.
Diego Pineda (00:10:37)
Whether it’s through manual audits or collaborative work with DSPs and publishers, brands are recognizing that hybrid models offer the best of both worlds: speed from AI and judgment from humans. This is especially true for measuring effectiveness.
Connie Yan (00:10:55)
Effectiveness is mostly measured through accuracy metrics. Advertisers can run post-campaign audits and brand lift studies, and then they’ll monitor false positives and negatives. So over time, success would look like fewer incidents, really consistent brand alignment, and minimal loss of reach. But advertisers can also design messaging that thrives in specific environments. That way ads aren’t just protected, but they’re also optimized for resonance and return.
False positives mean you’re blocking good content and losing reach. And then false negatives mean bad stuff is slipping through. So really, the solution here is a strong feedback loop. Advertisers need to audit, flag errors, and then let the AI learn from its mistakes. A hybrid model is really best here: AI plus human review is still the best setup.
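The audit feedback loop Connie describes amounts to comparing the filter's decisions against human audit labels and tracking both error types. A small sketch with invented audit data:

```python
# Each tuple: (blocked_by_filter, actually_risky_per_human_audit)
audit = [
    (True,  True),   # correctly blocked
    (True,  False),  # false positive: safe content blocked, reach lost
    (False, False),  # correctly served
    (False, True),   # false negative: risky content slipped through
    (True,  True),   # correctly blocked
    (False, False),  # correctly served
]

fp = sum(1 for blocked, risky in audit if blocked and not risky)
fn = sum(1 for blocked, risky in audit if not blocked and risky)

# Rates are relative to how much safe / risky content there actually was.
fp_rate = fp / sum(1 for _, risky in audit if not risky)
fn_rate = fn / sum(1 for _, risky in audit if risky)

# The feedback loop: if fp_rate dominates, the filter is over-blocking and
# thresholds can be loosened; if fn_rate dominates, tighten them.
```

In practice the "risky" labels come from exactly the human review both guests emphasize, which is what makes the hybrid model a loop rather than a one-time configuration.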
Diego Pineda (00:11:54)
I asked Matt what the biggest knowledge gaps are regarding brand safety and what IAB is doing to educate the industry.
Matt Shapo (00:12:01)
One of the important conversations that we had there over the last several months of last year and in the first couple of months of this year is: what can we do as an advertising community to allow advertisers to feel more comfortable investing in a genre, namely news, where we know there are incredibly engaged, highly attentive audiences?
Many people in the advertising community are hesitant to invest in news because there are sometimes hard stories there, sometimes difficult topics. And there’s this longstanding assumption: I don’t want to be there, because if somebody’s reading a controversial or hard news story, there might be a negative association with my brand. So one of the things we talked about with Pover Council is that we all know that, many times, super consumers of news not only don’t penalize brands in their minds for being part of the news environments where they’re consuming content, they sometimes actively support those advertisers because they appreciate their support of these environments. And even if they take a more neutral approach, what often happens is that they’re more engaged. We had folks like Jack Marshall from DoubleVerify on, talking about how internal DoubleVerify data indicates that when you actually advertise in the news, there tends to be 10% higher engagement, and they’re seeing anywhere from 8% to 9% better performance on KPIs across the board. This kind of education, you can never do enough of it.
Diego Pineda (00:13:26)
So, where do we go from here? What does the future of brand safety look like as AI continues to evolve?
Connie Yan (00:13:32)
AI is really bringing a whole new level of sophistication, I think. Looking ahead, I think we’ll start to see even more advanced AI tools that can understand deep cultural nuances and even help brands define what brand safe really means to them. However, I don’t think sophistication is the only goal. What really matters is how these capabilities get applied, and whether it can be done transparently in the places where media actually runs. The technology is here. So the next phase is really operationalizing it, translating that capability into consistent, visible results for advertisers.
Diego Pineda (00:14:20)
Let’s wrap up with some key takeaways:
First, brand safety is no longer binary. It’s about suitability, and that means tailoring controls to each brand’s values and tolerance for risk.
Second, AI enables scalable detection. Tools like NLP, computer vision, and machine learning help flag risky content faster and more accurately than ever.
Third, human oversight is essential. Cultural nuance, satire, and emerging social trends still require a human touch.
And fourth, feedback loops improve effectiveness. Audits, measurement, and collaboration between brands, DSPs, and publishers help refine AI models and campaign strategies.
Before we go, I invite you to check out StackAdapt’s Resource Hub for more content about brand safety and AI in advertising. You can find the link in the show notes.
Podcast Outro (00:15:13)
Thanks for listening to this episode of The AI Advertising Podcast. This podcast is produced by StackAdapt. Visit us at stackadapt.com for more information about using AI in your advertising campaigns. If you liked what you heard, remember to subscribe, and we’ll see you next time.