Blog Archives | Shield: Digital communication governance and archiving solutions
https://www.shieldfc.com/resources/blog/

Trust is the new architecture
https://www.shieldfc.com/resources/blog/trust-is-the-new-architecture/ | Thu, 15 May 2025
Shield’s CISO shares why trust must be engineered into every layer of modern SaaS—blending agility, security, and continuous validation to meet rising risks and expectations.

The stakes have changed—they’ve been changing for more than a decade. With every headline about a data breach and every memo from a global financial institution demanding stronger controls, the explosion of SaaS and AI has created an innovation landscape that is as vulnerable as it is fast-moving. 

As regulators, customers, and major financial institutions continue to publicly raise the bar on what “secure by design” really means, it’s clear that compliance checklists and static controls no longer cut it. 

As someone responsible for both innovation and risk at Shield, I believe agility—when structured correctly—isn’t a security tradeoff. It’s a multiplier. In a world where systemic risk often flows through the supply chain, security should not be an overlay, but a design principle embedded in culture, code, processes, and cloud infrastructure. 

Companies shouldn’t secure data because the industry says they should—but because trust is the cornerstone of every relationship. 

Agile by intent, secure by design 

At every stage of my career, I’ve made security the foundation. 

It is not just a department; it’s a culture. It should be woven into the fabric of everything we do—every team we have. Our agile structure lets teams move fast and stay current—an essential edge as threats evolve daily. 

Taking a holistic view of security in all aspects of the business, I’ve made security design sign-off a prerequisite for any feature design or development. Going further, security is a core discipline within R&D, establishing a baseline where developers write secure code by instinct, not instruction. That’s the outcome we care about. 

When we build, we build with ownership. Every engineering decision reflects an understanding of impact, risk, and trust. This security-native thinking enables the kind of execution precision larger firms often struggle to achieve. 

Third-party proof, not just internal confidence 

Anyone can say they’re secure. I believe you have to prove it. 

From internal testing to external validation, security isn’t just a posture, it’s doing the work day in and day out to maintain it.  

Our technology is tested by the world’s most respected cybersecurity firms, including Deloitte. These continuous penetration tests go beyond surface scans and dig deep into real-world attack scenarios. At the same time, our internal operations are audited to SOC 2 Type II standards, validating that our security practices are not only in place but actually work—consistently. Our Secure Software Development Lifecycle (SSDLC) goes beyond checklists. It starts before the first line of code is written and extends across the entire lifecycle. 

This dual validation—from technology to team—is our way of saying: Don’t take our word for it. 

Security by architecture, not just policy 

In 2025, the organizations earning the most trust aren’t the ones with the biggest infrastructure—they’re the ones that treat security, compliance, and scale as first principles, not afterthoughts. Legacy thinking says archives belong in the basement. Modern resilience means architecting for visibility, speed, and control from day one. 

I’ve made sure we threat-model every feature and instrument every pipeline. We use active tooling that halts unsafe code and gives developers real-time feedback. It’s not just DevSecOps—it’s continuous, contextual, and deeply integrated. A simplified sketch of such a pipeline gate follows the list below. 

We: 

  • Monitor how code is structured 
  • Analyze cloud infrastructure definitions 
  • Validate identity and authorization flows 
  • Embed live feedback loops from runtime behavior 
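
To make that concrete, here is a minimal sketch, assuming a Python-based pipeline step, of how a gate like this can halt unsafe code and misconfigured infrastructure before merge. The checks and names are illustrative stand-ins, not Shield’s actual tooling:

```python
"""Illustrative CI gate: fail the build when basic security checks fail."""
import re
import sys

def check_code_structure(source: str) -> list[str]:
    """Toy static check: flag hard-coded secrets in source code."""
    findings = []
    if re.search(r"(api_key|password)\s*=\s*['\"]\w+['\"]", source, re.I):
        findings.append("possible hard-coded credential")
    return findings

def check_infra_definition(resource: dict) -> list[str]:
    """Toy infrastructure-as-code check: flag publicly exposed storage."""
    findings = []
    if resource.get("public_access", False):
        findings.append(f"{resource.get('name', '?')}: public access enabled")
    return findings

if __name__ == "__main__":
    findings = check_code_structure('password = "hunter2"')
    findings += check_infra_definition({"name": "archive-bucket", "public_access": True})
    for finding in findings:
        print("BLOCKED:", finding)
    sys.exit(1 if findings else 0)  # a non-zero exit halts the pipeline
```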

Data is sacred—and guarded accordingly 

In today’s digital economy, customer data is both the crown jewel and the crown risk. Treating it as sacred isn’t just a compliance statement—it’s a cultural one. The organizations leading the charge are the ones who recognize that true data stewardship requires more than perimeter defense; it demands continuous, internal accountability as well. 

Our access policies are governed by a “just-in-time, least-privilege” approach. This means only the right people, at the right time, and only for the exact task required—no more, no less. Every access is logged, audited, and automatically expired unless revalidated. No exceptions. Most importantly, customers remain in control: Every safeguard we implement supports transparency, accountability, and compliance with the highest regulatory standards. Data isn’t just protected—it’s governed with precision. 
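
As a rough illustration of the just-in-time, least-privilege pattern, the sketch below models grants that are scoped to one resource, logged on every decision, and expired automatically unless revalidated. The class and policy values are hypothetical, not Shield’s implementation:

```python
"""Minimal sketch of just-in-time, least-privilege access grants."""
from datetime import datetime, timedelta, timezone

class AccessGrant:
    def __init__(self, user: str, resource: str, task: str, ttl_minutes: int = 30):
        self.user, self.resource, self.task = user, resource, task
        # Grants expire automatically unless explicitly revalidated
        self.expires_at = datetime.now(timezone.utc) + timedelta(minutes=ttl_minutes)

audit_log = []  # every access decision is recorded

def is_allowed(grant: AccessGrant, resource: str) -> bool:
    """Allow only the granted resource, and only until the grant expires."""
    allowed = grant.resource == resource and datetime.now(timezone.utc) < grant.expires_at
    audit_log.append({
        "user": grant.user, "resource": resource,
        "allowed": allowed, "at": datetime.now(timezone.utc).isoformat(),
    })
    return allowed

grant = AccessGrant("analyst@example.com", "case-1234", task="incident review")
assert is_allowed(grant, "case-1234")      # right resource, inside the window
assert not is_allowed(grant, "case-9999")  # least privilege: wrong resource
```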

Where trust is built 

Crises happen. Systems fail. Threats evolve. Resilience isn’t built in the moment—it’s rehearsed long before. In a world where disruption is inevitable, the leaders separating preparation from performative compliance are the ones who treat crisis playbooks as living systems, not shelfware. Many vendors concentrate on protection and often overlook crisis planning.  

Shield, however, maintains a living business continuity and crisis management playbook. These aren’t just documents for compliance—they’re action plans tested regularly to ensure our teams are ready when it counts. From cyberattacks to system outages, we’ve mapped, rehearsed, and prepared for scenarios our customers may experience. 

The SaaS landscape is changing fast, and so are the threats. Supply chain risk, identity sprawl, and AI-powered attacks are no longer emerging—they’re here. But so is the opportunity to lead. 

I believe trust isn’t won by scale. It’s earned through consistency, transparency, and execution. And as the industry raises its expectations, we’re proud to be among the companies leading that shift. 

Because in a world where software runs everything, trust should run the software. 

Learn more about our security here.

Strengthening information barriers: Why it matters now
https://www.shieldfc.com/resources/blog/strengthening-information-barriers-why-it-matters-now/ | Thu, 15 May 2025
The FCA’s latest bulletin sounds the alarm on rising M&A leaks and outdated information barriers. Discover why legacy controls fall short and how firms can adopt dynamic, AI-driven surveillance to meet growing regulatory demands.

When the FCA recently released its Primary Market Bulletin 54, it did more than offer guidance—it issued a clear warning. Strategic leaks during M&A deals are no longer isolated compliance breaches. They’re becoming systemic, and the regulator is taking note. 

This bulletin turns a spotlight on what many in legal and compliance functions have long known: Information barriers alone are not a sufficient control. You may say two people shouldn’t talk about x, but an information barrier cannot actually stop them from talking about x. In an environment where material non-public information (MNPI) can leak across voice, chat, or email in seconds, firms need to rethink how information control frameworks actually work. 

The stakes have changed—and so must the approach. 

The FCA’s red flags: What firms should take away 

The FCA’s bulletin doesn’t just revisit UK MAR expectations. It raises questions about the operational integrity of how firms handle sensitive deal information today. In particular, the regulator is concerned about: 

  • Strategic or negligent disclosures that appear to originate from inside deal teams 
  • Weak enforcement around “need-to-know” protocols across advisory and issuer networks 
  • A lack of auditable oversight mechanisms to detect, prevent, and respond to information seepage 
  • Culture and governance gaps that enable sensitive data to circulate too freely—and too informally 

When the FCA starts linking M&A leak patterns with enforcement risk, it’s not a time for incremental fixes. 

Rising risks in the numbers 

The FCA’s 2024 Suspicious Transaction and Order Report (STOR) figures reinforce the urgency behind Primary Market Bulletin 54. In 2024, 87% of all STORs submitted related to insider dealing—the majority linked to trading ahead of earnings announcements and M&A activity. Equities dominated the reports, while commodities, fixed income, and FX markets showed significantly lower volumes, raising concerns about under-surveillance and potential blind spots. 

The takeaway is clear: While equity surveillance appears relatively mature, non-equity markets like commodities, FX, and fixed income lag behind, suggesting blind spots in detection and reporting; in these less-monitored asset classes, gaps can be even wider. Without comprehensive monitoring and active information barriers, firms risk missing critical threats—and exposing themselves to growing regulatory scrutiny. 

Why legacy information barriers aren’t enough anymore 

The way firms used to manage inside information—with restricted lists, firewalled teams, and manual compliance checks—still leaves wide open space for leaks. Even where surveillance exists, ad hoc reviews based on lexicons do not provide preventative or adequate control. Digital collaboration and hybrid working models have blurred boundaries and made static controls feel increasingly performative. 

The numbers from the 2024 STOR report show that insider risks are rising even in highly surveilled markets. As trading patterns become more complex and corporate activity increases, static information barriers leave firms exposed to faster-moving, harder-to-detect leaks. 

In practice, many firms struggle with: 

  • Visibility into how restricted lists actually translate across communication platforms 
  • Differentiating between permissible internal collaboration and boundary-crossing disclosures 
  • Retrospective reviews that surface issues too late to mitigate reputational or legal damage 

The result is an ever-widening gap between policy and practice—one that the FCA, and other regulators, are now pointing to explicitly. 

Now what? 

Getting ahead of this risk isn’t just about tightening controls—it’s about making them dynamic, contextual, and enforceable. Firms serious about preventing unlawful disclosures during M&A activity (and similar high-risk events) should focus on two core shifts, sketched in code after the list: 

  • Automate the linkage between restricted entities and communications surveillance—including voice, email, chat, and collaboration platforms. 
  • Monitor both proactively and retroactively, surfacing misuse or unauthorized access for the entire time an individual or team remains on a “need-to-know” list. 
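
As a toy illustration of these shifts, the sketch below links a restricted-list entry to message surveillance for the duration of a need-to-know window. The data model and matching logic are simplified assumptions, not Shield’s Information Barriers model:

```python
"""Toy sketch: linking a restricted list to communications surveillance."""
from datetime import date

# Restricted list: who is inside the "need-to-know" window for which deal
restricted = [
    {"person": "a.trader", "deal": "Project Falcon",
     "from": date(2025, 3, 1), "to": date(2025, 6, 1)},
]

def flag_message(sender: str, text: str, sent: date) -> bool:
    """Flag a message if its sender is on a restricted list for a deal
    mentioned in the text, for the whole time the entry is active."""
    for entry in restricted:
        in_window = entry["from"] <= sent <= entry["to"]
        if entry["person"] == sender and in_window and entry["deal"].lower() in text.lower():
            return True
    return False

# The same check works in real time (proactive) or over an archive (retroactive)
print(flag_message("a.trader", "Any update on project falcon pricing?", date(2025, 4, 2)))  # True
print(flag_message("a.trader", "Lunch?", date(2025, 4, 2)))                                 # False
```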

It’s this kind of dynamic enforcement that moves a firm from “we had a policy” to “we saw the breach, and we stopped it.” 

What good looks like: Bridging compliance lists and communications 

At Shield, we’ve seen the benefits of this firsthand. Our Information Barriers model within Shield Surveillance was designed to operationalize the surveillance of sensitive information across eComms and voice. It connects compliance lists—watch, restricted, deal, research—to real-time alerts and review workflows. 

It doesn’t just enforce policy. It closes the loop. 

By scanning for risk across the lifecycle of a deal, the platform gives compliance teams the ability to detect and respond to potential leaks while individuals are still within the “need-to-know” window. Whether used proactively to prevent misuse, or retroactively to investigate, it provides the accountability regulators are demanding. 

Shield’s broader platform was also recently recognized by Gartner as a Visionary in the Digital Communications Governance and Archiving (DCGA) Magic Quadrant and ranked number one in AI critical capabilities. One of the reasons cited: Our ability to operationalize AI for modern surveillance—and translate policy into actual protection. 

Rethinking risk 

Information barriers aren’t just a compliance concern. They’re a trust signal. They protect firm reputation, deal value, and client confidence. As the FCA sharpens its focus on information control, firms have an opportunity to get ahead—not just to avoid fines, but to build smarter risk cultures. 

In a world where leaks are no longer tolerated as inevitable, enforcement is no longer about the breach. It’s about the response. 

And that’s something every firm should be ready to show. 

All female, all AI: Discussions around governance, ethics and innovation
https://www.shieldfc.com/resources/blog/all-female-all-ai-discussions-around-governance-ethics-and-innovation/ | Mon, 07 Apr 2025
Explore the insights from leading women in tech on AI governance, ethics, and innovation. This panel discussion dives into the real-world challenges and powerful impacts of AI in compliance, transparency, and diversity. The conversation uncovers the complexities and opportunities in AI's evolving landscape from experts in the field.

When leading women in tech talk AI, compliance gets interesting—fast. Forget buzzwords and boardroom lingo. Our recent panel pulled back the curtain on how AI is actually being built, challenged, and put to work in the real world of risk and regulation. Spoiler: it’s not always pretty, but it is pretty powerful.

The discussion was moderated by Jess Jones, Surveillance SME for Thought Networks, and the panel featured Dr. Shlomit Labin, Shield’s VP of Data Science, Kay Firth-Butterfield, CEO of Good Tech Advisory, and Erin Stanton, the AI & Data Lead at Virtu Financial.

If you missed the webinar, we’ve captured the highlights for you below (and if you want to tune in, click here).

Transparency and self-governance in AI

ChatGPT supports 300 million weekly active users—and it launched only two years ago. While this level of adoption presents countless opportunities, it also brings complex ethical challenges. Unlike traditional rule-based systems, today’s machine-learning models are notoriously difficult to govern.

With adoption growing fast and regulation still playing catch-up, all panelists agreed that firms need to protect themselves by updating their internal governance strategies. Labin specifically urges users to put guardrails in place to address declining AI explainability.

GenAI results should be validated against external knowledge, or with more traditional technologies such as Google search. Compliance teams can reestablish governing power by prompting LLMs to explain chains of reasoning and comparing answers from multiple models.
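
As a rough sketch of that cross-model comparison, the code below prompts several models to expose their reasoning and collects their answers side by side; `query_model` is a hypothetical helper standing in for your LLM client:

```python
"""Sketch: cross-check one question against several models, asking each
to show its chain of reasoning before answering."""

def query_model(model_name: str, prompt: str) -> str:
    """Hypothetical stand-in for a call to the named model's API."""
    return f"[{model_name}] step-by-step reasoning... final answer: no clear risk signal"

def cross_check(question: str, models: list[str]) -> dict[str, str]:
    # Ask each model to expose its reasoning, not just a verdict
    prompt = f"{question}\n\nExplain your reasoning step by step, then state your final answer."
    return {m: query_model(m, prompt) for m in models}

answers = cross_check(
    "Does this message suggest sharing of MNPI?",
    models=["model-a", "model-b", "model-c"],
)
for model, answer in answers.items():
    print(model, "->", answer)
# Disagreement between the models is the signal to escalate to human review
```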

Another concern raised is security. While there are no easy answers, the simple act of being transparent about data inputs and outputs can boost public perception. AI model-builders like Stanton are setting the stage for a more open-source industry by leading with transparency and publishing the data they use, but more importantly, the data they wish they had. She explains that calling out shortfalls encourages data sharing and helps to bridge gaps in datasets.

Firth-Butterfield notes that being aware of shortfalls extends beyond just the data provided and into the sustainability of the technology itself. “LLMs are extremely thirsty for energy, consuming a quarter of a litre of water every time you ask a question.”

Being open about inefficiencies in technology leaves space for you to actively teach users how to navigate those areas—in this case, raising awareness of AI’s resource consumption reduces waste because users may opt for a more resource-efficient option instead.

AI’s role in Financial Services

AI’s ‘black box problem’ poses issues for compliance teams, especially in high-stakes industries like finance. Compliance teams must understand how decisions are made, and Stanton’s team at Virtu aids this by documenting every dataset inclusion and exclusion, as well as every algorithm used. They then share this information in layman’s terms to ensure everyone can understand and challenge the model’s logic. “Our compliance team loves that we’ve built this into every step of our process.”

Stanton explains that developers have an ethical and social responsibility to place guardrails inside their models, as LLMs will learn bias from plain data if left unchecked. She makes a great point that model builders ultimately have responsibility over deploying models: “even if I’ve spent a year building this model, if I don’t love how it works then I just won’t deploy it.”

For example, if a dataset lacks strong representation from a region like Asia, developers can block outputs for that geography to avoid unreliable predictions. 

The bias problem: Inclusion in AI

The panel didn’t shy away from discussing AI’s diversity gap. LLMs are trained on internet data, data that overwhelmingly reflects the perspectives of white males from the Global North. “Just the fact that this is an all-female panel,” said Firth-Butterfield, “helps to diversify the data pool.”

For people of color, the rate of representation is even worse: ⅓ of the global population isn’t connected to the internet, and therefore none of their data is represented.

And the challenge is getting worse. With increasing reliance on AI-generated data, we’re witnessing a phenomenon called ‘model cannibalism,’ where AI models are trained on their own outputs, compounding bias over time. It’s estimated that as early as mid-2026, there’ll be more AI-generated than human-created data! The EU AI Act and other international AI policies aim to reduce risks stemming from these biases; for example, creating a risk profile around a person is now prohibited because vendors can’t ensure that their AI models will be free of bias.

Shield’s role in the future of AI

The panel agreed that while AI is revolutionizing compliance, the challenge lies in how we as a community govern its use. But with the right voices at the table, we can work towards a future where AI is inclusive, accountable, and free of bias.

We’re committed to creating space for important conversations to happen, because the future of AI isn’t just about models and data—it’s about people.

Watch the full webinar on-demand here.

Key learnings from SIFMA
https://www.shieldfc.com/resources/blog/key-learnings-from-sifma/ | Thu, 03 Apr 2025
A recap from SIFMA, a major compliance conference, highlighting key themes and insights from regulators navigating the evolving landscape.

Last week, more than 2,100 legal and compliance professionals descended on Austin, Texas. Here’s a recap of what Alex de Lucena, Shield’s Director of Product Strategy, took note of… 

Regulators back to business(ish) 

Considering the recent change in administration, a predictable chorus of “back to our traditional mission” pervaded the event—I mean, the actual words were repeated. Regulators emphasized charging bad actors while also signaling a shift back toward guidance-first leadership. (Read: The creative thinking that birthed Panuwat need not apply.) 
 
Mary Jo White dusted off her cross-examination skills, pressing a panel of regulators to confirm whether penalties would decrease under the new regime. None affirmed — how could they? — but a collective pause suggested she might be onto something. 
 
Meanwhile, in the hallways and bars, a different wind blew. Several regulatory panelists insisted that deeper stances would only emerge once other executive branches were fully staffed. Boilerplate evasion or the first signs of more structural changes? No one could say for sure. 

Tales from the crypto

Crypto got a more welcome, albeit still vague, reception. Regulators gave nods to its potential while reaffirming that enforcement remains grounded in time-tested violations — think Rule 2110 and its ilk. 
 
The SEC’s Emerging Technologies unit continues to focus on AI-washing cases, many of which conveniently intersect with digital asset promotions. But broader, consistent guidance around crypto regulation remains a work in progress. 

AI: On the other hand…

AI was presented as both promise and risk — with a strong preference for the present-tense, acceptable uses over speculative ones. One striking reminder: If AI speeds up legal workflows, the billable hour can’t pretend otherwise. 
 
A panel of surveillance leads highlighted compelling AI use cases: 
– Alert closure and QA of those decisions 
– A centralized U4 repository 
– Sentiment surfacing in written communications 
 
One panelist noted issues with emoji and multilingual coverage — both areas where AI has a clear use case. (FWIW, Shield offers native emoji and language coverage OOTB.) 

What are off-channel comms again? 

SIFMA’s straight-faced panel laid out the full arc of off-channel comms enforcement. They traced the fines, the SEC’s dissents, and the more recent retooling of consequences — lighter monetary penalties, no ICCs, and no heightened supervision. 
 
The takeaway? The regulatory posture is shifting, but that doesn’t mean the pressure is off. The expectation is clear: Firms must proactively define and defend their communication boundaries. 

Bottom Line: 

The themes are familiar — enforcement, guidance, crypto, AI, off-channel comms. But beneath them is a regulatory community recalibrating. Not pulling back. Not pushing forward. Just shifting gears. 

From Noise to Knowledge: Contextualizing risk in financial communications with GenAI
https://www.shieldfc.com/resources/blog/from-noise-to-knowledge-contextualizing-risk-in-financial-communications-with-genai/ | Mon, 03 Mar 2025
This blog explores how GenAI transforms risk management in financial communications. Discover a context-aware, 3-layered approach that fine-tunes model sensitivity, reduces false positives, and adapts to nuanced market language.

Modern financial firms need to quickly identify and surface risks when monitoring communications. As firms increasingly turn to GenAI to bolster their risk management strategies, a new question emerges—how well does a GenAI model understand the nuanced context of your firm’s communications? 

More importantly, how well does a GenAI vendor understand the need to offer context behind a model’s output and accordingly tailor its development practices? 

Communication is complex and firms need models that can distinguish between casual conversation and potential red flags, understand the subtle differences in communication across various financial markets, and adapt to the ever-evolving language of finance. 

Context is everything when it comes to AI-driven communications monitoring and surveillance. Technology is reshaping the landscape of financial compliance and risk detection with a layered approach to building context-aware AI models. One thing is becoming very clear—the future of surfacing risk with GenAI lies in understanding data better and offering firms the flexibility to understand the context of their outputs. 

The significance of context in GenAI for financial institutions 

Context is essential to understanding any form of communication—financial or not. Often, seemingly common phrases can have very different implications depending on the context. 

For example, the phrase “I really need a favor” might be innocuous in most situations, but in the context of a cross-border deal or in a market where the exchange of favors is less common than regulation might permit, it could be a significant red flag. 

Specialized jargon and market-specific language make a model’s task more challenging. For instance, in equities markets, sharing MNPI is strictly forbidden, while in energy markets, discussions about utilities or potential delivery delays are more common and not necessarily indicative of wrongdoing. 

Firm-based differences add another layer of complexity. For example, the way traders at one bank communicate might differ subtly from their counterparts in your bank. GenAI models need to be sophisticated enough to recognize these nuances without overemphasizing them or creating false positives based on regional or firm-based speech patterns. 

Furthermore, what one institution considers a potentially risky communication might be viewed differently by another, depending on its specific risk appetite and regulatory obligations. This is especially true with conduct-related issues where, at a glance, workplace complaints can take on more significance depending on a firm’s history with culture issues.  

And if these challenges weren’t enough, GenAI vendors must also account for multilingual communication when developing models. The model must not only accurately translate the content but also understand idioms, cultural references, and context-specific meanings that may not have direct equivalents in other languages. 

To address these concerns, GenAI vendors must offer firms the flexibility to define their own risk boundaries in output and fine-tune their models. 

One benefit of imposing more context on outputs is the dramatic reduction in false positives and noise. By distinguishing between genuinely suspicious activity and normal business operations, context-aware AI allows compliance teams to focus their efforts on real risks rather than wading through a sea of irrelevant alerts. 

Hand in hand with noise reduction comes an improvement in the relevance of generated alerts.  

When an AI system flags a communication, you can have greater confidence that it truly warrants attention. This improved precision stems from the model’s ability to understand the context of conversations, including market-specific jargon, regional language differences, and the subtle cues that might indicate potential risks. 

Perhaps most importantly, the ability to fine-tune model sensitivity allows you to strike the right balance between comprehensive coverage and operational efficiency. This customization ensures that the monitoring system aligns perfectly with your risk taxonomy and regulatory obligations. 

Having said all that, what does context-aware model development look like? 

A layered approach to building context into models 

A best practice for developing context-aware AI models in financial communications monitoring involves a 3-layered approach. This method provides a comprehensive framework for understanding and contextualizing communications, offering a more nuanced and accurate risk detection system. A minimal code sketch of the flow follows the list. 

  1. The first layer involves ingesting messages, classifying them, and tagging them. This layer doesn’t generate alerts; it simply tags and classifies the incoming data by looking at contextual information. 
  2. The second layer aggregates the information tagged in the first layer against specific risks. For example, it might consider whether secrecy language appears alongside specific trade talk or bragging. 
  3. The third layer uses GenAI to perform a comprehensive analysis. It can identify potential issues that the more targeted approaches of the first two layers might have missed. 
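
As promised above, here is a minimal sketch of the three layers, with illustrative tags, rules, and a placeholder model call rather than production components:

```python
"""Minimal sketch of a 3-layered, context-aware risk pipeline."""

def layer1_tag(message: str) -> dict:
    """Layer 1: classify and tag — no alerts are generated here."""
    text = message.lower()
    return {
        "text": message,
        "tags": {
            "secrecy": any(w in text for w in ("keep this quiet", "between us")),
            "trade_talk": any(w in text for w in ("position", "order", "price")),
        },
    }

def layer2_aggregate(tagged: dict) -> bool:
    """Layer 2: combine tags against a specific risk, e.g. secrecy plus trade talk."""
    t = tagged["tags"]
    return t["secrecy"] and t["trade_talk"]

def layer3_genai_review(tagged: dict) -> str:
    """Layer 3: a GenAI pass over the full message for anything layers 1-2 missed.
    Here just a placeholder for a model call."""
    return f"LLM assessment of: {tagged['text']!r}"

msg = "Keep this quiet, but move the order before the price announcement."
tagged = layer1_tag(msg)
if layer2_aggregate(tagged):
    # Alerts carry the tags as context, explaining *why* the message was flagged
    print("Alert with context:", tagged["tags"], "|", layer3_genai_review(tagged))
```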

This 3-layered approach offers a level of flexibility and customization that’s crucial for financial firms. Unlike a one-size-fits-all model, the layered approach allows for more precise risk detection and reduced noise in alerts. 

One key benefit is the ability to provide rich context around why something is flagged as a potential risk. This detailed context helps compliance teams understand not just that a risk was detected, but why it was flagged, enabling more informed decision-making. 

The layered approach also allows for more nimble adaptation to different markets and communication styles. For instance, a model can differentiate how language is used in various contexts, such as the implications of “asking for a favor” in different markets. 

The 3-layered approach also helps firms tailor model outputs to their risk appetites. This customization allows you to adjust the thresholds for risk detection based on your unique requirements. An added benefit is that you can adapt to different regulatory environments and internal risk taxonomies without retraining models. 

Of course, much depends on the quality and relevance of the training data used. Specialized data helps the model understand context-specific risks that might not be apparent in general language models. However, for some models, such as ones focused on detecting secrecy, using open-source datasets with finance-specific prompts is the right approach. 

This balanced approach allows for the development of robust models without the need for extensive, hard-to-obtain financial datasets for every aspect of the system. It also enables the model to understand general language patterns while still being attuned to finance-specific nuances. 

Contextualizing risk is an ongoing process that requires continuous refinement. It requires a partnership between you and your GenAI providers to continually optimize your models based on real-world performance and evolving risk landscapes. 

Context is key to surfacing risk 

The importance of context in AI-driven communications monitoring is paramount, particularly in an industry where a single misinterpreted message could have significant regulatory or financial repercussions. 

Shield’s approach to this challenge exemplifies best practices in the field: 

  • A 3-layered approach—combining initial classification, risk-specific aggregation, and comprehensive GenAI analysis—that offers a robust solution to the intricacies of financial communication monitoring 
  • Model flexibility that helps you define risk thresholds 
  • Customization to your specific needs, backed by an ongoing refinement process 

When paired with model flexibility and a commitment to transparency, defining context to aid model output reduces false positives and lifts your surveillance program to new heights. 

Learn how AmplifAI—Shield’s GenAI toolkit—surfaces risks and offers unmatched context in model outputs. 

Shield Expands Executive Leadership with Appointments of Tal Raziel-Yosef as Chief People Officer and Adi Yaari-Fubini as Chief Customer Officer
https://www.shieldfc.com/resources/blog/shield-expands-executive-leadership-with-appointments-of-tal-raziel-yosef-as-chief-people-officer-and-adi-yaari-fubini-as-chief-customer-officer/ | Wed, 29 Jan 2025
Shield announces two additions to its executive leadership team: Tal Raziel-Yosef as Chief People Officer and Adi Yaari-Fubini as Chief Customer Officer, underscoring its focus on employee and customer experience amid rapid global growth.

Shield, a leading AI-driven compliance and risk management platform for financial services, is pleased to announce two strategic additions to its executive leadership team. Tal Raziel-Yosef has been appointed as Chief People Officer, and Adi Yaari-Fubini joins as Chief Customer Officer. These appointments underline Shield’s commitment to fostering excellence in both employee and customer experiences as the company continues its rapid global growth.

This announcement follows Shield’s recognition as a Visionary in the 2024 Gartner® Magic Quadrant for Digital Communications Governance and Archiving (DCGA), a testament to the company’s innovative approach and leadership in compliance technology.

Tal Raziel-Yosef – Chief People Officer
Tal brings a wealth of experience in human resources leadership, with a proven track record of building high-performing teams and driving HR strategy in both startups and large technology enterprises. She most recently served as Chief People Officer at Dynamic Yield, where she supported the company’s rapid international growth, led its acquisition by Mastercard, and successfully managed the integration of global teams through transformative change. At Shield, Tal will focus on empowering Shield’s people-first culture, ensuring the company attracts, develops, and retains top talent worldwide.

Adi Yaari-Fubini – Chief Customer Officer
With extensive expertise in managing customer relationships and aligning product strategies with client needs, Adi brings invaluable insights to Shield’s customer-focused vision. Adi previously served as Chief Customer Officer at Priority Software and held leadership roles at ProQuest, Ex Libris, and Deltathree. In her new role, she will lead Shield’s global customer experience teams, driving customer success and enhancing client partnerships.

Shiran Weitzman, Co-Founder and CEO of Shield, commented: “We are thrilled to welcome Tal and Adi to our leadership team. Their diverse experiences in HR and customer service perfectly align with Shield’s mission to empower trust in financial markets. At our current stage of growth, it’s vital to have leaders who not only understand the complexities of scaling, but who also share our deep commitment to our employees and customers. Together with our incredible team, I am confident we will continue to achieve our vision of empowering financial institutions to operate with integrity and compliance.”

About Shield
Founded in 2018, Shield is at the forefront of compliance and surveillance technology innovation. The company’s AI-powered platform monitors, records, and analyzes over five million digital communications daily across financial institutions worldwide, enabling organizations to meet stringent regulatory requirements. Shield’s platform spans various communication channels, including emails, chats, phone calls, Zoom, Microsoft Teams, and more, detecting potential fraud, insider trading, and other risks early in the process.

Shield’s inclusion as a Visionary in the 2024 Gartner® Magic Quadrant for DCGA further validates its commitment to delivering cutting-edge solutions that empower financial institutions to stay ahead of regulatory demands while fostering trust and transparency.

Shield’s advanced capabilities also extend beyond fraud prevention, helping organizations build trust and foster healthier workplaces by identifying instances of harassment, bullying, or offensive behavior. With 160 employees across its Ramat Gan headquarters and offices in the US and Europe, Shield continues to expand its team, seeking talent across development, product, finance, operations, sales, and marketing. For more information about Shield and its transformative compliance solutions, visit www.shieldfc.com.

2025 Compliance Trends List
https://www.shieldfc.com/resources/blog/2025predictions/ | Wed, 15 Jan 2025
Our in-house compliance and regulations guru peers into his magic 8 ball and offers five predictions for the year ahead, from AI regulation and governance operations to crypto’s ascent.

It’s 2025, and we’re all ready to see what’s coming this year. Just like last year, we’ve challenged our in-house compliance and regulations guru to peer into his magic 8 ball and offer up predictions that don’t look just like 2024’s trends.  

So here are the things you should be looking forward to (or avoiding: See last paragraph…) this year:   

  1. AI regulation – The use cases are growing and evolving, but what we’re still not clear on is how regulators view them. Having left the majority of governance to compliance teams based on rather vague recommendations, this year global regulators and legislators will more clearly define what counts as AI and the standards for its usage. 
  2. AI and governance operations – As Spider-Man says, “With great power comes great responsibility.” We’ve got the power; now we need to define how responsibility works. This year teams will implement AI to become better and more operationalized from a governance, vetting, and deployment perspective. This is a good thing, as much of today’s hesitation around adoption ties back to a lack of standardization in how to properly onboard AI. 
  3. Regulatory tension – A more business-friendly US administration will inspire regulators elsewhere, perhaps moving against previous decisions. Whether at the US state level or among other global regulators, a counter-push will have the effect of making a wash of regulatory under- and over-reach. 
  4. Non-financial misconduct and ESG priorities muted – This past year we heard a lot about code of conduct compliance, but moving forward, efforts at policing non-financial conduct may be relatively modest in comparison. Despite best efforts, regulators will take a step back from explicitly attempting to affect company culture as a way to curb regulatory malfeasance. ESG-focused regulations will likely meet a similar, if more muted, fate. 
  5. Crypto ascends – Much hay has already been made of how the next year will likely mark a legitimization of digital assets in the eyes of regulators. Less clear is what this will mean in practice and how it will translate to surveillance challenges. Crypto has its own language that many surveillance vendors need to catch up on. Financial crime controls will become more relevant. It will be interesting to see how coverage adapts to new asset considerations. 

As a bonus prediction, I have this sinking feeling we’re going to see a big financial scandal. I’m talking something LIBOR-level, Enron-like—a financial scandal that’s bold and bad and peels back the curtain on gaps that were in plain sight. Like the hundred-year floods our ancestors used to watch for, I’m starting to fret over an impending scandal that will drop some time toward the end of the year…so stay compliant out there!  

The MRM imperative: Screening GenAI vendors for regulatory compatibility
https://www.shieldfc.com/resources/blog/the-mrm-imperative-screening-genai-vendors-for-regulatory-compatibility/ | Wed, 13 Nov 2024
This blog explores the critical role of Model Risk Management (MRM) in assessing GenAI vendors for compliance. Discover how transparency, regulatory alignment, and robust documentation support financial institutions in mitigating risks and maintaining regulatory standards in AI-driven compliance solutions.

As more financial institutions (FIs) adopt GenAI models for compliance and surveillance, vendor transparency and model deployment flexibility have emerged as important prerequisites to adoption. Simultaneously, regulators are putting the onus on firms to ensure model outputs minimize concerns around accuracy, privacy, bias, intellectual property, and possible exploitation by threat actors. 

Model Risk Management (MRM) functions have become the key to helping firms meet these needs, and a vendor’s ability to meet MRM standards is critical. The question is, what are some good indicators you can look for when evaluating a vendor in this regard? 

The best way to evaluate a vendor’s suitability is to look at its commitment to maintaining transparency. However, this is a loaded term, with several variables affecting what the overall picture looks like. 

Perhaps the best way to think of these variables is to treat them as pieces in a puzzle. Evaluate the quality of each piece and you’ll build a picture of how strong a vendor’s commitment to MRM standards is.  

The foundations of MRM compliance  

Let’s begin by listing the different pieces you’re going to need to pay attention to. The most important ones to consider are: 

  • Transparency around model methodologies and underlying assumptions 
  • Robust change management controls 
  • High-quality documentation 
  • Regulatory alignment 
  • Implementation controls 
  • Commitment to ethical SDLC processes 
  • Robust security controls 

Now looking at each of these in more detail, transparency around model methodologies and assumptions is the place to start. Good vendors disclose model methodologies, their underlying assumptions, and data sources, allowing you to assess model appropriateness and identify potential biases or limitations. 

Look at the nature of the datasets the vendor used, its ability to investigate and explain output discrepancies, and explainability scores that help you understand model output. 

Change management is another critical aspect of transparency. A reliable vendor will have a clear change management process, communicating updates proactively and providing detailed information on how these changes affect model performance and outputs. 

Documentation is the next piece of the puzzle. Comprehensive documentation should cover the entire model lifecycle, from initial development through ongoing monitoring. 

Some critical factors documentation must address are:

  • Model stability 
  • Input reliability 
  • Output consistency 
  • Potential risks and model limitations 
  • Strategies for identifying and addressing issues like upstream problems or model failures. 

Pay attention to how frequently the vendor updates their documentation and whether they emphasize the measurability of model performance—a key aspect of ensuring ongoing compliance with MRM standards. 

The next critical piece is regulatory alignment. Good vendors align themselves with regulatory concerns, both present and future, at every stage through to deployment. 

Examine a vendor’s controls and documentation around test sets, front-end visualization tools, and model tuning. In addition, look at how well a vendor helps you understand model output divergences from baselines—a critical standard regulators expect firms to be able to meet. 

A vendor’s partnership-based approach should extend from the initial research phase through to the final interaction with client data, providing start-to-end explainability of AI tools. 

Implementation is also a big part of the puzzle. Look for controls and systems that capture and report on model performance metrics. This might include regular model performance assessments, automated alerts for anomalies, and periodic reviews of model assumptions and methodologies. 

Vendor responsiveness to feedback, its commitment to ethical development practices, and the quality of its security controls are the final pieces. By prioritizing these elements, vendors help you satisfy regulatory requirements and create a robust compliance and surveillance program. 

Performance reporting 

While controls and documentation give you a good qualitative assessment of a vendor’s models, performance reports give you quantitative data. Robust performance metrics and comprehensive reporting are crucial to ensuring ongoing compliance. Ask your vendors for detailed insights into model performance, data integrity, and overall governance. These elements form the foundation of effective risk management and regulatory alignment. 

Data integrity is a central factor in model reliability. Vendors should offer comprehensive reports that detail issues that might hamper model outputs. Some examples include: 

  • Corrupted files 
  • Server reboots that may have caused data drops 
  • Encrypted messages that couldn’t be processed 
  • Oversized files that exceeded processing limits 

The ability to account for missing data and explain the reasons behind any data loss is a good indicator of a vendor’s commitment to transparency. Moreover, vendors should have processes in place to replay missed data in a timely manner and notify customers of output discrepancies. 

Building on this foundation, statistical performance reports provide crucial insights into model effectiveness and reliability. Some key metrics to look for (a toy computation follows the list) are: 

  • Precision and recall 
  • Alert averages 
  • Usage drifts 
  • Performance drifts 
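
As a toy illustration of the first two metrics above, the snippet below computes precision and recall from reviewer-confirmed alerts and runs a crude alert-volume drift check; all inputs and thresholds here are made up:

```python
"""Toy computations for alert precision/recall and alert-volume drift."""

def precision_recall(flagged: set, relevant: set) -> tuple[float, float]:
    true_pos = len(flagged & relevant)
    precision = true_pos / len(flagged) if flagged else 0.0   # how many alerts were real
    recall = true_pos / len(relevant) if relevant else 0.0    # how many real risks were caught
    return precision, recall

def alert_drift(weekly_alert_counts: list[int], threshold: float = 0.5) -> bool:
    """Crude drift check: has this week's volume moved >50% from the running average?"""
    baseline = sum(weekly_alert_counts[:-1]) / (len(weekly_alert_counts) - 1)
    return abs(weekly_alert_counts[-1] - baseline) / baseline > threshold

print(precision_recall(flagged={1, 2, 3, 4}, relevant={2, 3, 5}))  # (0.5, 0.667)
print(alert_drift([100, 110, 95, 240]))  # True — a volume jump worth investigating
```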

Governance and reporting are the final pillars that ensure a vendor meets MRM standards. Vendors should offer comprehensive dashboards or APIs that allow you to pull relevant information and create your own reports. These tools should provide visualizations and insights at the infrastructure level, reports on significant volume drops (which could indicate potential issues), and overall model health indicators. 

The flexibility to access and analyze this data is crucial to maintaining oversight and meeting regulatory requirements. 

Navigating the future of GenAI-driven compliance 

The right vendor will not only provide powerful GenAI models but also equip you with the insights and tools needed to maintain confidence in their compliance efforts. In an era of increasing regulatory scrutiny, this level of transparency and performance insight is invaluable for managing risk and maintaining trust with regulators and stakeholders alike. 

As financial institutions continue to embrace AI-driven solutions for compliance and surveillance, the importance of complying with MRM standards cannot be overstated. At Shield, our commitment to flexibility and transparency lies at the core of our model development. 

Curious about how our flexibility, transparency, and commitment to rigorous model validation can impact your communications surveillance program? Learn how Shield’s AmplifAI is a force multiplier for compliance teams.  

Discover the 5 essential questions to ask when evaluating AI compliance vendors

Open Books and AI Vendors: Transparency’s Role in Model Validation
https://www.shieldfc.com/resources/blog/open-books-and-ai-vendors-transparencys-role-in-model-validation/ | Wed, 16 Oct 2024
This blog discusses the importance of transparency in AI model validation for financial institutions, focusing on how vendors can ensure ethical practices and compliance with regulatory guidelines.

As financial services firms have adopted artificial intelligence (AI) for compliance and surveillance, the rapid rise of GenAI in other industries has raised questions about its potential for compliance in kind. In turn, these developments have led to vendors selling sophisticated LLM and GenAI models to banks. 

While this shift is promising for overall AI development, it raises a critical question—how can you ensure that a vendor’s model outputs are properly validated? 

In the past, models used pre-defined datasets, making explainability and verification relatively straightforward. However, newer GenAI models are often trained on a vendor’s proprietary datasets, making output validation a complex process. 

Since vendors are unlikely to grant a firm access to proprietary data, examining its model validation processes is critical. After all, the stakes are too high to base decisions on the outputs of a model whose validation process is questionable. 

The importance of validated model results 

As AI’s potential to help compliance and surveillance within firms increases, the risks associated with poorly validated models loom large, especially when reviewing regulatory, business, and reputational concerns. 

The regulatory landscape for AI in finance is evolving quickly. Regulators like FINRA and the FCA have issued guidelines around GenAI usage in compliance. The EU AI Act offers the most comprehensive set of guidelines for firms to follow. As regulators work to frame AI development in compliance terms, model outputs will come under greater scrutiny, making poorly validated results unacceptable. These pressures underscore the need for robust, compliant AI systems. 

Inadequately validated models can produce wrong decisions and false flags, creating blind spots in an institution’s monitoring and decision-making process. These gaps can result in missed opportunities or undetected risks, directly impacting operational efficiency and the bottom line. The ability to not only implement AI models but also demonstrate their effectiveness and explain their decision-making process becomes crucial in this context. 

Vendor transparency is a crucial factor in maintaining the integrity of AI-driven compliance and communications monitoring processes. 

Good vendors distinguish themselves by their willingness to be open about their training methods and the statistical analyses they employ to validate their results. This transparency helps you verify the vendor’s results against your data—a step that is essential in ensuring the model’s applicability to your institution’s context. 

Moreover, this level of transparency facilitates a deeper understanding of the model’s strengths and limitations. The ability to fine-tune models based on a thorough understanding of their inner workings also addresses concerns around regulatory compliance and business risks.  

You can demonstrate to regulators that you not only understand the AI models you’re using but also can adapt them effectively. 

The question is, what should you ask your vendors and what does vendor transparency look like practically? 

The core principles of vendor model validation 

The best communications surveillance vendors do more than build models—they build governance and transparency from the ground up. That includes: 

  • Model explainability 
  • Model monitoring 
  • Good change management 
  • Balanced datasets 

It’s important that vendors explain how their models were developed and the nature of the datasets they were trained on, and that they collaborate with you to clarify discrepancies in results. Model explainability is fundamental to transparency. Usually, explainability is a collaborative process between the vendor and the firm, with the vendor sharing statistical models that justify confidence in outputs and firms reviewing assumptions against internal data. 

A good vendor also tunes its models on your data and refines them based on your labels and annotations in output. Closely tied to explainability are model transparency, fairness, and interpretability. 

As regulators and stakeholders increasingly scrutinize AI-driven decision-making processes, vendors must demonstrate that their models are free from bias and discriminatory practices. 

Aside from ensuring ethical AI development, the following are key aspects of transparency and fairness: 

  • Tool development and use cases that address specific risk types mapped to industry regulations 
  • Internal charters and policies that explicitly commit to ethical AI development practices 
  • Explainability scores and ongoing reporting on AI-driven selections to enable decision-making interrogation 

Model monitoring and change management are the other critical components of responsible AI implementation for compliance and communications monitoring. While modern LLM models generate fewer false positives, rigorous change management processes remain essential. 

Look for vendors that implement changes only with explicit customer consent and thorough testing and validation. Each update should be treated with the same level of scrutiny as an initial implementation, allowing customers to test and verify the changes against their specific needs and internal controls. 

Lastly, pay attention to the kind of data a vendor uses. Synthetic data plays a valuable role in enhancing datasets and improving model performance. Reputable vendors understand that an overreliance on synthetic data can potentially limit the diversity and real-world applicability of their models. 

While synthetic data can be useful for augmenting datasets and providing additional learning examples, it should not completely replace real-world data, especially in the critical phases of testing and validation. 

The ideal approach involves a balanced mix, leveraging synthetic data to enhance model training while prioritizing real customer data for final testing and validation. This ensures that the AI models can effectively handle the nuances and complexities of actual financial communications and transactions. 
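
A rough sketch of that balance, with placeholder datasets, might look like the following: synthetic rows augment training, while the held-out test set stays entirely real.

```python
"""Sketch: synthetic data augments training; validation uses real data only."""
import random

real_data = [("real message %d" % i, i % 2) for i in range(1000)]      # labeled real comms
synthetic = [("synthetic message %d" % i, i % 2) for i in range(500)]  # generated examples

random.seed(42)
random.shuffle(real_data)
holdout = real_data[:200]            # final test set: real data only
train = real_data[200:] + synthetic  # synthetic rows augment training only

# model.fit(train); evaluate(model, holdout)  # validation never sees synthetic rows
print(f"train={len(train)} (incl. {len(synthetic)} synthetic), test={len(holdout)} all real")
```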

Charting a course for transparent GenAI adoption 

The future of finance is intertwined with AI and it’s a future brimming with opportunity for those prepared to embrace it responsibly. A truly valuable partner distinguishes itself through an unwavering commitment to transparency and rigorous model validation methods. 

At Shield, we embody these principles through: 

  • Comprehensive test sets that enhance customer understanding of our models’ operations. 
  • Custom model tuning using client data to ensure relevance and accuracy. 
  • Explainability scores for alert triggers, providing insight into model decisions. 
  • Thorough AI model documentation to support regulatory compliance efforts. 
  • A collaborative change management process that prioritizes customer consent. 
  • Strict adherence to all major AI guidelines including the EU AI Act, FINRA, and FCA regulations. 

Learn more about how Shield can boost your firm’s compliance monitoring processes through the innovative GenAI development of AmplifAI. 

Discover the 5 essential questions to ask when evaluating AI compliance vendors

AI Regulations: A global journey
https://www.shieldfc.com/resources/blog/ai-regulations-a-global-journey/ | Mon, 07 Oct 2024
Discover how global AI regulations impact compliance, with insights from the EU, US, China, and more regions.

As artificial intelligence (AI) continues to reshape industries globally, regulatory frameworks are being developed across regions to ensure ethical deployment, transparency, and accountability. The regulations are changing almost as quickly as the tools they’re regulating. 

But that doesn’t mean you’re off the hook: you still have to stay current and, more importantly, make sure that your organization’s policies align. You need a way to keep up. 

That’s why we’re taking you on a journey around the world to explore how different regions are addressing the evolving challenges of AI and focusing on key regulations that are shaping the compliance landscape. Here we provide you with an overview of what we are seeing across different regions. 

European Union  

The European Union has been at the forefront of AI regulation, leading the charge with the EU AI Act, which addresses policies and controls associated with AI; firms are urged to keep pace and stay compliant. 

The EU AI Act classifies AI systems into unacceptable, high, and low-risk categories. High-risk applications must adhere to strict transparency and human oversight standards. The Act prioritizes ethical AI use and protecting individual rights. Many see this step forward as what the rest of the world will use as a framework for their own regulations in the future. 

United Kingdom  

In 2021, the UK published the National AI Strategy, which promotes innovation while ensuring ethical use. In 2024, the Financial Conduct Authority (FCA) declared the year’s focus to be on evaluating AI in surveillance. Essentially, the UK is balancing innovation with regulation to ensure AI technologies are used responsibly across all industries. The strategy applies stricter principles for the ethical use of AI in banking, focusing on governance, risk management, and operational resilience. 

United States  

AI regulations in the US have been more sector-specific, with guidelines tailored to industries such as finance and technology. Agencies like the Securities and Exchange Commission (SEC) and the Financial Industry Regulatory Authority (FINRA) have implemented rules to ensure transparency, ethical AI use, and accountability.  

US regulations are now evolving to address the increasing use of AI in surveillance, fraud detection, and decision-making processes. In mid-2024, the Department of Justice (DoJ) recruited its first Chief AI Officer, which indicates the US is preparing for both the challenges and opportunities that AI presents. 

Australia 

Early in 2024, the Australian Securities and Investments Commission (ASIC) called on financial institutions to strengthen their supervisory arrangements for recording and monitoring business communications to prevent, detect, and address misconduct. And in July, ASIC released Information Sheet 283, which directly responds to concerns around the use of unmonitored communication channels in business communications. 

Singapore 

Singapore’s proactive approach to AI governance positions the country as a leader in promoting responsible AI use in the Asia-Pacific region. Singapore’s Model AI Governance Framework outlines clear principles for ethical AI deployment, focusing on transparency and accountability. But rather than issuing sweeping AI regulation that covers all industries, it is taking a sectoral approach, with individual ministries, authorities, and commissions publishing guidelines and regulations. 

China

China’s regulatory efforts focus on promoting ethical AI development while ensuring data security and privacy. The Cyberspace Administration of China (CAC) has established AI governance guidelines that emphasize accountability and transparency in AI systems. China’s regulations also highlight the importance of protecting personal data in an increasingly digital landscape, ensuring that AI technologies are developed and deployed ethically and securely. 

As the world turns… 

These AI regulations all share common themes of ethical concerns, privacy protection, and accountability. While some regions, like the EU, have more prescriptive regulations, others are focusing on establishing control and validation measures for AI systems. 

Ready to explore the evolving global AI regulatory landscape?
