
Lisa Masten
Consulting Principal – Data and Analytics
Lisa Masten has more than 20 years of experience in the investment management industry, leading projects and designing solutions in the areas of performance and attribution, data management, market data administration, portfolio analytics, and investment accounting. Lisa leads Cutter Associates’ Data and Performance practice, where she advises clients, designs operating models, selects systems, and implements business and technology solutions. She also organizes the Implementation practice for Cutter, including leading development of its adaptable delivery framework and project toolkit.
Prior to joining Cutter, Lisa was a Senior Manager at Invesco, where she managed a global team responsible for designing and implementing data, performance, and accounting solutions. She has held roles implementing and supporting processes and technology across the front, middle, and back office at multiple asset management firms. Lisa holds a Bachelor of Arts in finance and computer science from North Central College in Illinois.

Kathy McDermott
Managing Director, Research
Kathy McDermott has more than 25 years of experience in the financial services industry. Kathy, who joined Cutter Associates in 2011, has authored multiple Cutter Research reports on a wide range of topics. She spent ten years as a consultant for a variety of asset management firms, with a focus on business analysis for front- and middle-office projects. Kathy was previously the senior business analyst, equity trading systems, at Wellington Management, specializing in the firm’s proprietary electronic and basket trading applications. Earlier in her career, Kathy worked for Thomson Financial Services (now Thomson Reuters), supporting FirstCall and PORTIA clients in Hong Kong, Japan, Singapore, and Thailand, and later went on to manage PORTIA implementations. She also worked at LongView (now Linedata) as an account manager and then product manager of electronic trading. Kathy earned her Bachelor of Arts in mathematics from Hamilton College.
Recent research assignments and publications include the following:
- Alternative Data and the Expanding Universe of Investment Information
- Cutter Benchmarking: Data Management
- Cutter Benchmarking: Market Data Administration
- DataOps: In Theory and Practice
- Enabling Data Analytics
- The Evolving Front Office Support Model
- Evolving Data Governance: Building a Data Culture
- Making Snowflake Part of Your Modern Data Platform
- Managed Data Services
- Market Data Administration
- Order Management Systems
- Outsourced Trading: Has the Time Come?
- Reference Data Management Solutions

For years, firms have assembled a patchwork of solutions to provide safe and efficient access to approved data, while end users have complained that they don’t know where to find data, or lack access to the data they require in the form they need it. And although investment professionals have long used statistical analysis to explore data and inform their investment decisions, until recently only a small group of managers or quants used Python or R to perform complex analysis on datasets they assembled themselves. Today, demand for data and self-service analytics comes from across the enterprise, especially the front office.
A semantic layer serves many purposes and, depending on how it is deployed, can support different end users and solutions, including supporting self-service analytics and providing an abstraction layer that limits disruption when you modernize your data architecture.
But what is a semantic layer and how does it work?
Traditional Semantic Models

Before firms can analyze data, and before users can create their own analytics, they need access to data in a format they can use. Traditionally, quants created their own data models (often in Excel) by extracting, normalizing, and blending data from multiple sources such as trading or accounting platforms, shared drives, or wherever else they could beg, borrow, or steal it. Quants built a view of the data to answer the business questions they were addressing. The model they constructed is sometimes called a semantic layer.
Over time, IT would recreate these solutions using data warehouses and data marts. With these traditional tools, semantic layers were static and grew stale, and maintaining them or adding new data sources required a significant investment of IT time and development effort. New models built on old ones created a brittle, spaghetti-like mess of connections and dependencies. The result was that everyone was afraid to change anything for fear of breaking something.
Tools like Tableau, Power BI, and Alteryx help by allowing business users to build semantic models and blend new data sources. But these models tend to hold static data, and because they are proprietary representations within those applications, they cannot be shared with other tools.
A Better Way Forward
A better way forward is to use next-generation tools to build a flexible semantic layer that is performant, virtual, and cloud enabled. A virtual semantic layer provides an abstraction layer that models data from various sources, but the data itself is not moved or replicated. What this looks like depends on a firm’s data architecture, its data consumption use cases, and where and how the source data is stored. Some firms implement a semantic layer to support self-service analytics. Others create one to act as an abstraction layer that minimizes disruption to end users when underlying source systems change during a data architecture modernization or system replacement.
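The abstraction-layer idea can be sketched with plain SQL views. The following is a minimal, hypothetical illustration using Python’s built-in sqlite3 module; the table and view names are invented, and a real deployment would use a virtualization platform rather than a single database. Consumers query a stable, business-friendly view, so when the underlying source system is replaced, only the view definition changes:

```python
import sqlite3

con = sqlite3.connect(":memory:")

# Legacy source system: positions keyed by an internal account code.
con.execute("CREATE TABLE legacy_positions (acct_cd TEXT, qty REAL)")
con.execute("INSERT INTO legacy_positions VALUES ('A100', 500.0)")

# The "semantic layer": a stable view with business-friendly names.
con.execute("""CREATE VIEW positions AS
               SELECT acct_cd AS account, qty AS quantity
               FROM legacy_positions""")

def report(con):
    # Downstream consumers only ever query the view, never the source tables.
    return con.execute("SELECT account, quantity FROM positions").fetchall()

print(report(con))  # [('A100', 500.0)]

# Later, the firm migrates to a new source system with a different schema.
con.execute("CREATE TABLE new_positions (account_id TEXT, units REAL)")
con.execute("INSERT INTO new_positions VALUES ('A100', 500.0)")

# Only the view definition changes; consumer queries are undisturbed.
con.execute("DROP VIEW positions")
con.execute("""CREATE VIEW positions AS
               SELECT account_id AS account, units AS quantity
               FROM new_positions""")

print(report(con))  # [('A100', 500.0)]
```

The same query returns the same result before and after the migration, which is precisely the disruption-limiting property the abstraction layer provides.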
Firms may use data virtualization tools such as Dremio and Denodo to create a virtualized semantic layer without duplicating data or physically moving it to a centralized repository. Data virtualization accesses data where it lives; the layer contains no source data, only the metadata required to access each source, plus security and governance controls. These tools let firms create reusable semantic models that can be shared across the enterprise, with traceability back to each data source.
These modern tools expose the data through query languages such as SQL or through web APIs, allowing a vast array of tools to take advantage of the models. Data lineage is fully documented even as logic changes, and knowing how the logic is applied gives users the confidence to change dependent models. These tools also let firms blend new datasets quickly without expensive IT resources. A semantic layer allows data to remain in its source formats (CSV, Excel, Parquet) and locations while exposing it to business users as if it sat in one consolidated database, where they can analyze it with SQL or more business-friendly tools.
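The “many physical stores, one logical database” idea can also be sketched in a few lines. This hypothetical example again uses Python’s built-in sqlite3 (the file names and schemas are invented; a production semantic layer would sit over real trading and accounting systems via a virtualization tool, not SQLite files). Two separate data stores are attached to one connection, and a single blended view joins across them without copying any data:

```python
import os
import sqlite3
import tempfile

tmp = tempfile.mkdtemp()
trading = os.path.join(tmp, "trading.db")
accounting = os.path.join(tmp, "accounting.db")

# Two physically separate stores, standing in for a trading system
# and an accounting system.
with sqlite3.connect(trading) as con:
    con.execute("CREATE TABLE orders (security TEXT, qty REAL)")
    con.execute("INSERT INTO orders VALUES ('IBM', 100.0)")
with sqlite3.connect(accounting) as con:
    con.execute("CREATE TABLE prices (security TEXT, price REAL)")
    con.execute("INSERT INTO prices VALUES ('IBM', 140.0)")

# One connection attaches both sources and exposes a blended,
# business-friendly view; no data is moved or replicated.
con = sqlite3.connect(":memory:")
con.execute(f"ATTACH DATABASE '{trading}' AS trd")
con.execute(f"ATTACH DATABASE '{accounting}' AS acc")
con.execute("""CREATE TEMP VIEW order_values AS
               SELECT o.security, o.qty * p.price AS market_value
               FROM trd.orders o JOIN acc.prices p USING (security)""")

rows = con.execute("SELECT security, market_value FROM order_values").fetchall()
print(rows)  # [('IBM', 14000.0)]
```

To the business user issuing the final query, the two sources look like one database, which is the consumption experience a virtual semantic layer aims to provide at enterprise scale.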
Five Benefits of a Virtual Semantic Layer
- Access to different data sources in one centralized location
- The ability to mask data or present it in a transformed state, reducing the need for ETL/ELT development
- An abstraction layer over the actual data sources, so changes can be made to the underlying sources without disrupting data consumers
- The ability to optimize performance and scalability without significant development
- The ability to quickly blend new data sources without IT involvement
Don’t Miss Out
A virtual semantic layer allows end users working in different tools to access the same approved data, even when that data is physically stored in multiple locations. It also helps prevent users on different teams from creating their own siloed data models, and it ensures that approved, trusted data is used across the enterprise.

Once a virtual semantic layer is created, users across the organization can use the tools that work best for them (BI tools, Python, R, SQL, etc.) to connect to that same semantic layer and explore the data, regardless of where or how it is physically stored.

To learn more about self-service analytics that leverage the semantic layer, check out our best practices for self-service data analytics infographic.
To speak with a Cutter analyst or consultant about this topic, contact us at [email protected].