Kathy McDermott
Managing Director, Research
Kathy McDermott has more than 25 years of experience in the financial services industry. Kathy, who joined Cutter Associates in 2011, has authored multiple Cutter Research reports on a wide range of topics. She spent ten years as a consultant for a variety of asset management firms, with a focus on business analysis for front- and middle-office projects. Kathy was previously the senior business analyst, equity trading systems, at Wellington Management, specializing in the firm’s proprietary electronic and basket trading applications. Earlier in her career, Kathy worked for Thomson Financial Services (now Thomson Reuters), supporting FirstCall and PORTIA clients in Hong Kong, Japan, Singapore, and Thailand, and later went on to manage PORTIA implementations. She also worked at LongView (now Linedata) as an account manager and then product manager of electronic trading. Kathy earned her bachelor of arts in mathematics from Hamilton College.
Recent research assignments and publications include the following:
- Alternative Data and the Expanding Universe of Investment Information
- Cutter Benchmarking: Data Management
- Cutter Benchmarking: Market Data Administration
- DataOps: In Theory and Practice
- Enabling Data Analytics
- The Evolving Front Office Support Model
- Evolving Data Governance: Building a Data Culture
- Making Snowflake Part of Your Modern Data Platform
- Managed Data Services
- Market Data Administration
- Order Management Systems
- Outsourced Trading: Has the Time Come?
- Reference Data Management Solutions
Alijah Poindexter
Research Analyst
Alijah Poindexter is an experienced professional in financial services research and consulting, with a background spanning banking, healthcare, asset management, and fintech. He joined Cutter Associates in 2025, where he supports the firm’s research initiatives with a focus on research production and design, analysis, and content development. Prior to Cutter, Alijah was a senior research associate at Datos Insights, producing market research on commercial banking, digital payments, and healthcare payments. He also led multiple client consulting engagements, delivering strategic, data-centric advisory work. Earlier in his career, he served as associate editor at Bank Automation News, where he focused on banking coverage and industry events, and held program and research analyst roles. Alijah earned his BBA from Austin Peay State University in Clarksville, Tennessee.
Investment management firms have rapidly moved from experimenting with artificial intelligence to adopting and implementing it. However, as firms accelerate their AI use, many face a common challenge: establishing governance without slowing down innovation. Interviews with Cutter Research members and our survey results show this challenge is widespread and remains unresolved. Cutter’s 2025 AI Governance member survey found that 41% of participants identified the fear of hindering innovation as a top challenge in scaling their AI governance.
Of surveyed firms, 41% cited “fear of stifling innovation” as a top challenge in scaling AI governance. (Source: Cutter Research AI Governance member survey, March 2025)
In Cutter’s AI Governance survey, nearly half of investment managers (47%) report having already established AI governance programs or practices, while another 27% say they are actively implementing them. At the same time, firms are working to move beyond simply experimenting with AI and to incorporate it into workflows that deliver meaningful value. According to Cutter’s 2025 Data Management Benchmarking survey, 58% of investment management firms use more than one Generative AI (GenAI) engine, but only 4% have well-established AI capabilities within the business, with consistent tools and proper support. Although firms aim to continue maturing their AI governance, they remain cautious about taking an overly restrictive approach and possibly deterring employees from experimenting or exploring innovative AI use cases.
Tips for Right-Sizing AI Governance
In 2024, many investment management firms maintained some form of “cease and desist” order until they could formalize their AI acceptable-use policies. Afterward, some firms feared they had missed the chance to be early adopters and to set the tone for AI innovation. Since that time, Cutter Research members have expressed their desire to have governance and policy in place that promote AI use rather than throttle progress toward goals. Establishing proper governance and guardrails can help build trust in AI across the organization. Governance also reduces risk by setting boundaries and keeping AI within acceptable limits. While every firm will have a unique, nuanced approach, support from executive leadership is essential for establishing right-sized governance and a culture of innovation. In this article, we provide techniques for framing AI governance to support AI experimentation and adoption while ensuring AI is used responsibly and in a controlled manner.
Use Case-Driven Oversight
Firms should tailor oversight to each AI use case, applying stricter governance to higher-risk applications. Cutter’s research shows that firms vary in their governance based on factors like culture, risk appetite, and past experience. Typically, they assess each use case’s risk, regulatory impact, and ROI before deciding on oversight measures. For example, tasks such as summarizing research often require initial testing, proofs of concept (POCs), and citations, whereas decision-support AI demands audit logs and regular model reviews. Front-office AI may face closer scrutiny due to client requirements. Because explainability is crucial for establishing trust, autonomous AI agents need frequent human and/or AI monitoring, clear records of actions and data used, and robust escalation and shutdown protocols.
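To make this concrete, the sketch below shows one way a firm might encode use case-driven oversight as a simple risk-tier mapping. The tiers, controls, and example use cases are illustrative assumptions, not Cutter recommendations.

```python
from dataclasses import dataclass

# Illustrative risk tiers and controls; a firm would tailor these to its
# own culture, risk appetite, and regulatory obligations.
OVERSIGHT_BY_TIER = {
    "low": ["acceptable-use policy", "initial testing"],
    "medium": ["proof of concept", "output citations", "periodic review"],
    "high": ["audit logs", "regular model reviews", "human sign-off"],
    "autonomous": ["continuous monitoring", "action and data logging",
                   "escalation and shutdown protocol"],
}

@dataclass
class AIUseCase:
    name: str
    risk_tier: str          # one of the keys in OVERSIGHT_BY_TIER
    client_facing: bool = False

    def required_controls(self) -> list[str]:
        controls = list(OVERSIGHT_BY_TIER[self.risk_tier])
        if self.client_facing:  # front-office AI may face closer scrutiny
            controls.append("client disclosure review")
        return controls

# Example: a research summarizer vs. an autonomous decision-support agent.
summarizer = AIUseCase("research-summaries", "medium")
agent = AIUseCase("rebalance-agent", "autonomous", client_facing=True)
print(summarizer.required_controls())
print(agent.required_controls())
```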
Human-in-the-Loop Remains the Default
Despite rising interest in agentic and autonomous AI, our interviews with member firms indicate a consistent view that human accountability remains key to AI governance, as humans provide intuition, experience, judgment, and oversight. This emphasis aligns with survey results indicating that 96% of firms see risk control as the main goal of AI governance. Human oversight is currently the most practical way for firms to manage quality, explainability, and reputational risks. This is especially true because GenAI, unlike traditional software, produces nondeterministic outputs (outputs that can differ despite identical inputs or prompts).
As some firms move forward with AI agents, most treat them as they would junior employees or interns, whose work must be reviewed and overseen. As trust in a junior employee’s work builds, responsibilities can grow, and firms can adjust oversight accordingly. Likewise, firms can build trust in AI by implementing it responsibly and establishing appropriate guidelines from the beginning.
For more information on adapting existing frameworks for AI governance, see Cutter’s 2025 research, AI Governance.
Consider What You Already Have in Place
One of the top findings from Cutter’s AI Governance research is that AI governance is rarely built in isolation. Survey respondents reported leveraging existing enterprise governance capabilities. Our survey found:
- 56% say cybersecurity practices inform AI governance
- 44% cite data governance as both informing and being updated due to AI
- 48% link vendor management practices directly to AI oversight
Instead of creating new frameworks, firms extend familiar structures like data governance, vendor risk management, cybersecurity, data permissions, and model oversight to address AI risks. This approach reduces friction, makes AI governance feel routine, and builds confidence among business, tech, risk, and compliance teams. Many see AI as another tool and prefer to adapt existing governance with AI-specific policies within current frameworks, thus avoiding unnecessary complexity. Firms should rely on these existing frameworks since they are already aligned with governance, risk, and compliance requirements.
Auditability and Explainability
Firms often cite regulatory uncertainty as a brake on their AI progress, but Cutter’s research shows that requirements such as documentation, traceability, and oversight can actually boost AI adoption by increasing confidence. As confidence in AI grows, firms feel more comfortable deploying it more widely. Firms also become more at ease moving from pilot projects to production once they have inventoried AI use cases, logged usage, put tools in place to explain data inputs and outputs, and set up use case-appropriate guardrails.
For years, quant teams have used MLOps to train, validate, package, deploy, and monitor their models. They have a registry of their ML models and regularly monitor them for drift. Monitoring AI is not a new concept, but monitoring new types of AI, such as those used by business users rather than data scientists, requires different risk mitigation and oversight.
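As a minimal illustration of this kind of monitoring, the Python sketch below flags drift by comparing a feature’s training-time distribution with recent production inputs using a two-sample Kolmogorov-Smirnov test. The synthetic data and the 0.05 significance threshold are assumptions for demonstration, not a recommended standard.

```python
import numpy as np
from scipy.stats import ks_2samp

def feature_drifted(train_values, live_values, alpha=0.05) -> bool:
    """Flag drift when the two samples likely come from different distributions."""
    stat, p_value = ks_2samp(train_values, live_values)
    return p_value < alpha  # low p-value suggests the input distribution shifted

rng = np.random.default_rng(seed=7)
train = rng.normal(loc=0.0, scale=1.0, size=5_000)  # distribution at training time
live = rng.normal(loc=0.4, scale=1.0, size=1_000)   # shifted production inputs

if feature_drifted(train, live):
    print("Drift detected: flag the model for review in the registry.")
```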
Auditability and explainability are essential, and they grow in importance as firms apply AI to higher-risk use cases or autonomous agents. Firms should prepare for coming regulations by maintaining audit logs and by being able to clearly explain their AI’s data inputs and outputs, and how both are used.
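The sketch below illustrates one possible shape for such an audit record, capturing who used which model, for what use case, and a traceable fingerprint of the inputs and outputs. The field names, model name, and hashing approach are assumptions; firms that need full explainability may retain the raw text under access controls rather than hashes alone.

```python
import hashlib
import json
from datetime import datetime, timezone

def audit_record(user: str, use_case: str, model: str,
                 prompt: str, response: str) -> dict:
    """Build a minimal, traceable record of a single GenAI call."""
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "use_case": use_case,
        "model": model,
        # Hashes give tamper-evident traceability without storing raw text.
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
        "response_sha256": hashlib.sha256(response.encode()).hexdigest(),
    }

entry = audit_record("analyst-042", "research-summaries", "example-genai-model",
                     "Summarize today's Fed minutes.", "The minutes indicate...")
print(json.dumps(entry, indent=2))  # in practice, append to a tamper-evident log
```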
For more information on maintaining an AI inventory, see Cutter’s article, Do You Know What You Are Doing With AI?: Why Firms Should Prioritize, Develop, and Maintain an AI Inventory.
Managing ‘Shadow AI’
As much as firms wish it weren’t the case, it’s difficult to avoid “shadow AI,” the unauthorized use of AI. The reality is that today’s firms must focus on managing shadow AI rather than trying to eliminate it altogether. Firms succeed in this area by prioritizing visibility, acceptable-use policies, training, and tracking use cases to better understand risks, establish safeguards, and encourage responsible experimentation. The following are some techniques firms use to manage shadow AI:
- Provide AI training and ensure employees understand their individual responsibilities
- Create an AI inventory
- Update acceptable-use policies to explicitly address AI
- Restrict data types (e.g., PII, HR data) instead of banning tools outright (see the sketch after this list)
- Monitor usage patterns
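As one illustration of the restrict-and-monitor techniques above, the sketch below screens prompts for restricted data patterns before they reach an external GenAI tool and logs every attempt for usage monitoring. The regex patterns, tool name, and log format are simplified assumptions; a production deployment would rely on a dedicated DLP or PII-detection service.

```python
import re

# Illustrative patterns for restricted data; deliberately far from exhaustive.
RESTRICTED_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "account_number": re.compile(r"\b\d{10,16}\b"),
}

usage_log = []  # in practice, a central store feeding usage-pattern monitoring

def screen_prompt(user: str, tool: str, prompt: str) -> bool:
    """Return True if the prompt is safe to forward; log every attempt."""
    hits = [name for name, pattern in RESTRICTED_PATTERNS.items()
            if pattern.search(prompt)]
    usage_log.append({"user": user, "tool": tool,
                      "blocked": bool(hits), "hits": hits})
    return not hits

assert screen_prompt("analyst-042", "external-chatbot",
                     "Summarize this earnings call transcript.")
assert not screen_prompt("analyst-042", "external-chatbot",
                         "Client SSN 123-45-6789 needs a tax summary.")
```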
This approach adds a layer of governance and allows employees to experiment with AI while recognizing they are ultimately responsible. Firms highlight that trust in employees and education serve as the best defense against shadow AI.
AI Governance for the Long Run
Over the past three years, we have interviewed several of the same firms to understand how their use and governance of AI have evolved. We found that advances in GenAI tools have allowed them to move faster and achieve more than they initially believed possible. The testing they conduct now yields much better results than it did a year or two ago, underscoring the urgency of focusing on AI governance. Cutter’s 2025 survey found that 88% of firms plan to formalize or expand AI and AI governance training, signaling that education, rather than restriction, is the industry’s preferred approach.
Firms making progress in this area shared the following key priorities in scaling AI governance:
- Governance should develop alongside real use cases
- Existing governance frameworks should be reused whenever possible
- Human accountability remains explicit
- Auditability and transparency are essential
- Regulatory compliance and proper oversight should be seen as enablers, not obstacles
- Shadow AI should be managed pragmatically
In practice, investment managers realize that strong AI governance provides a framework for repeatable, defensible, and scalable AI. It creates a safe environment for AI experimentation and a clear pathway for expanded production use and innovation at their firms.
Cutter Research has issued an AI Usage survey. If you would like to see how your firm compares with peers, please reach out to us at [email protected] to participate in the survey.