
Austin Coxhill
Senior Director, Consulting
Austin has over 20 years of experience with investment management technology at leading investment firms. He has worked across the front and back office, leading the development of global enterprise solutions, and has consistently helped teams leverage new technologies to automate processes, reduce costs, and provide new business capabilities.
Prior to Cutter, Austin spent time at Hulu/Disney as a Senior TPM, leading data analytics initiatives across multiple product groups for the convergence of the Disney and Hulu streaming analytics platforms.
From 2007 to 2020, Austin worked at Western Asset Management, an affiliate manager of Legg Mason with $450BN AUM. He held a senior technology leadership role responsible for the strategy, roadmaps, and implementation of enterprise data, cloud migration, DevOps, and citizen development. Prior to 2019, he held various lead roles managing teams responsible for delivering solutions for portfolio management, research, trading, portfolio compliance, regulatory reporting, and back-office operations, including confirmations, settlements, collateral management, and TBA processing.
Prior to Western Asset, he was Director of IT at PlusFunds Group, a fund of funds managing $3BN AUM, where his team overhauled the technology, improving operational efficiencies for NAV validation, risk reporting, and client reporting. Austin previously spent five years with Deutsche Bank in New York and London on various enterprise technology initiatives, including the replacement of legacy global equity settlement, cash management, and stock lending and borrowing systems.
Austin holds an HND in Business Information Technology from the University of Central Lancashire (UK).

Steven Longo
Senior Director, Consulting
Steven Longo is a senior technology and operations professional with over 20 years of experience in the financial services industry. He provides consulting on technology management, execution and change management, data management and operational practices, investment tools, technology architecture, IT infrastructure, IT controls, and business strategy. Steven’s experience includes leading operations and IT groups within asset management firms and media companies. He has led strategic and target operating model reviews, data management and operations reviews, systems implementations, infrastructure projects, cost and efficiency programs, team restructures, risk management efforts, IT due diligence assessments, and other activities to support investment teams. Prior to joining Cutter Associates, Steven was Head of Information Technology at Pendal Group Limited. Prior to Pendal, he was General Manager of Group IT Services for Seven Network Limited. He also held numerous technical and management roles at Schroder Investment Management, including Director, COO, and CIO in Australia; Head of IT for Schroder’s North American business in New York; and hands-on technical roles implementing portfolio management, unit registry, order management, and trading systems.
Steven holds a Bachelor of Science in Applied Physics and postgraduate qualifications in Data Processing, Applied Finance & Investment, and Business Management.

Lisa Masten
Consulting Principal – Data and Analytics
Lisa Masten has more than 20 years of experience in the investment management industry, leading projects and designing solutions in the areas of performance and attribution, data management, market data administration, portfolio analytics, and investment accounting. Lisa leads Cutter Associates’ Data and Performance practice, where she advises on and designs operating models, selects systems, and implements business and technology solutions. She also organizes Cutter’s Implementation practice, including leading development of its adaptable delivery framework and project toolkit.
Prior to joining Cutter, Lisa was a Senior Manager at Invesco, where she managed a global team responsible for designing and implementing data, performance, and accounting solutions. She has held roles implementing and supporting processes and technology across the front, middle and back office at multiple asset management firms. Lisa holds a Bachelor of Arts in finance and computer science from North Central College in Illinois.
Are next-gen pipeline tools truly emergent or just incremental improvements on existing capabilities?

Many asset managers have developed skepticism toward the myriad modern data platform tools available today and the tangible benefits they promise over existing, traditional data stacks. This extends to data pipelines, where there’s a view in some quarters that the emerging suite of next-generation pipeline tools is functionally no different from what came before, just repackaged for marketing purposes.
This skepticism is no doubt driven, to an extent, by past experiences with other emerging technologies that failed to deliver on their hype. Are these next-gen pipeline tools truly “emergent,” or just incremental improvements on existing capabilities?
Traditional Approaches to Moving Data
The primary goal of provisioning access to quality data in a timely and efficient manner still stands today. The fundamental purpose of data pipelines also remains unchanged: moving data from a source to a destination, acting as the “glue” that binds a data platform together. However, this basic data transportation process has expanded over time into a broader set of data processing activities that incorporate both operational and business logic to perform advanced data sourcing, filtering, cleansing, aggregation, enrichment, transformation, and loading.
Over the past two decades, asset managers have built their data platforms using enterprise data management (EDM) or extract, transform, load (ETL) tools for ingestion and transformation. This EDM/ETL approach was well suited to the use cases of the time, when asset managers needed to extract structured, batch-processed investment data from operational transaction systems for cleansing, merging, transformation, and loading into data warehouse tables for analysis.
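To make that traditional pattern concrete, here is a minimal sketch of a batch ETL job in Python. The CSV extract, table, and column names are all hypothetical, and sqlite3 stands in for a data warehouse; a production pipeline built with an EDM/ETL tool would wrap scheduling, exception handling, and audit controls around the same basic steps.

```python
# Minimal sketch of a traditional batch ETL job. File, table, and column
# names are hypothetical; sqlite3 stands in for a data warehouse.
import csv
import sqlite3


def extract(path: str) -> list[dict]:
    """Pull the nightly batch extract from the transaction system (here, a CSV file)."""
    with open(path, newline="") as f:
        return list(csv.DictReader(f))


def transform(rows: list[dict]) -> list[tuple]:
    """Cleanse and reshape records before they reach the warehouse."""
    cleaned = []
    for row in rows:
        if not row.get("security_id"):          # drop incomplete records
            continue
        cleaned.append((
            row["security_id"].strip().upper(), # standardize identifiers
            float(row["quantity"]),             # enforce numeric types
            row["trade_date"],
        ))
    return cleaned


def load(rows: list[tuple], conn: sqlite3.Connection) -> None:
    """Write the transformed batch into a warehouse table."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS positions (security_id TEXT, quantity REAL, trade_date TEXT)"
    )
    conn.executemany("INSERT INTO positions VALUES (?, ?, ?)", rows)
    conn.commit()


if __name__ == "__main__":
    warehouse = sqlite3.connect("warehouse.db")
    load(transform(extract("nightly_positions.csv")), warehouse)
```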
Until now, these pipelines have served asset managers well, and they remain a good option for structured batch datasets. In this context, questioning the value of next-gen pipeline tools is understandable. In practice, however, many firms today wrestle with the same issues that prompted their initial investments in these EDM/ETL pipeline tools. So, what’s changed?
Drivers for Next Generation Pipeline Solutions
The basic need to ingest and transform structured investment datasets won’t be disappearing anytime soon. That said, business requirements continue to evolve, and new use cases have surfaced that increase the demand for processing more, and more varied, datasets. Once again, this creates significant challenges and pain points for many firms’ data teams.
The adoption of cloud services has propelled this evolution. Increasing costs and specialist resource constraints associated with maintaining on-premises computing and storage have compelled organizations to replace their outdated legacy IT infrastructure by migrating to the cloud and using SaaS applications. The emergence of these multi-environment configurations (e.g., on-premises, cloud, hybrid, and multi-cloud) has spawned use cases requiring that asset managers source data from a proliferation of disparate locations.
Data sources and types have also extended beyond traditional transaction-based investment data, and the demand to ingest and analyze unstructured data has progressed beyond the realm of marketing. Today, a growing appetite for alternative data analytics comes from the investment side. These alternative datasets (e.g., satellite weather data, geolocation foot traffic, customer credit card transactions, and social and sentiment data) require pipelines that can readily handle large-volume, unstructured, real-time, streamed, and limited-lifespan data.
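As a rough illustration of the streaming requirement, the sketch below buffers a simulated sentiment feed into micro-batches and lands them as files. The feed, field names, and landing path are hypothetical stand-ins; a real pipeline would read from a message bus or vendor API and write to cloud object storage.

```python
# Illustrative sketch only: buffering a streamed, short-lived dataset into
# micro-batches. The sentiment feed, field names, and landing path are
# hypothetical stand-ins for a real message bus or vendor feed.
import json
import os
import time
from itertools import count
from typing import Iterator


def sentiment_stream() -> Iterator[dict]:
    """Hypothetical stand-in for a real-time sentiment feed."""
    for i in count():
        yield {"ticker": "ABC", "score": (i % 10) / 10, "ts": time.time()}
        time.sleep(0.1)


def land_micro_batches(stream: Iterator[dict], batch_size: int = 50) -> None:
    """Buffer streamed events and land them as small files for downstream analysis."""
    os.makedirs("landing", exist_ok=True)
    batch: list[dict] = []
    for event in stream:
        batch.append(event)
        if len(batch) >= batch_size:
            path = f"landing/sentiment_{int(time.time())}.json"
            with open(path, "w") as f:
                json.dump(batch, f)
            batch.clear()


# land_micro_batches(sentiment_stream())  # runs until interrupted
```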
Benefits of Next Generation Data Pipelines

So what distinguishes the emerging crop of next-gen pipeline tools?
The biggest difference is that these next-gen tools are changing the way pipelines have traditionally been built. Traditional pipelines leveraged the same approaches and technology advancements that drove the evolution of software application development practices over the past 20 years. Modern data pipelines, by contrast, are built on scalable, cloud-based architectures. They leverage the elasticity and agility of the cloud to scale readily, work around traditional on-premises infrastructure bottlenecks, and address the latency issues associated with processing real-time streaming data and large unstructured data volumes.
Another key trend in modern data pipeline design is a change in where and when the transformation stage takes place. Adopting ELT (extract, load, transform) supports more agile pipeline processes by deferring transformation until after the data has been loaded into the data warehouse or data lake, taking advantage of the increased scalability and performance of these modern cloud-based platforms.
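A minimal ELT sketch, under the same assumptions as the earlier ETL example (hypothetical file and table names, sqlite3 standing in for a cloud warehouse), shows the shift: raw data is landed first, and the cleansing logic runs afterwards as SQL inside the warehouse itself.

```python
# A minimal ELT sketch, reusing the hypothetical nightly extract and the
# sqlite3 stand-in warehouse: the raw data is loaded as-is, and the
# cleansing/reshaping runs afterwards as SQL inside the warehouse.
import csv
import sqlite3

conn = sqlite3.connect("warehouse.db")

# Load: land the raw extract with no up-front cleansing.
conn.execute(
    "CREATE TABLE IF NOT EXISTS raw_positions (security_id TEXT, quantity TEXT, trade_date TEXT)"
)
with open("nightly_positions.csv", newline="") as f:
    rows = [(r["security_id"], r["quantity"], r["trade_date"]) for r in csv.DictReader(f)]
conn.executemany("INSERT INTO raw_positions VALUES (?, ?, ?)", rows)

# Transform: defer cleansing and reshaping to the warehouse's own SQL engine.
conn.execute("""
    CREATE TABLE IF NOT EXISTS curated_positions AS
    SELECT UPPER(TRIM(security_id)) AS security_id,
           CAST(quantity AS REAL)   AS quantity,
           trade_date
    FROM raw_positions
    WHERE security_id IS NOT NULL AND security_id <> ''
""")
conn.commit()
```

The transformation becomes a SQL asset that the warehouse engine executes at scale, rather than procedural code running on a separate ETL server.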
Moreover, next-gen pipeline tools support self-service management, allowing teams to easily create and maintain data pipelines without the assistance of skilled IT professionals. These tools can also leverage a semantic layer, mitigating the need to move large volumes of data between sources and locations, and they support simple, declarative SQL statements to implement parts of the pipeline, democratizing data access and avoiding the workload backlogs that teams typically experience with traditional pipeline development. Self-service management goes further still, employing data observability tooling to simplify the monitoring and management of pipeline problems.
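To give a flavor of the declarative, semantic-layer style, the sketch below expresses an entire pipeline step as a single SQL statement registered as a view, so no data is copied or moved. The view and table names are hypothetical, and sqlite3 again stands in for a cloud warehouse.

```python
# A hedged illustration of the declarative style: an entire pipeline step
# expressed as one SQL statement and registered as a view, so no data is
# copied or moved. View and table names are hypothetical; sqlite3 again
# stands in for a cloud warehouse.
import sqlite3

conn = sqlite3.connect("warehouse.db")

# The whole "step" is the SELECT below; no bespoke procedural code is required.
conn.execute("""
    CREATE VIEW IF NOT EXISTS portfolio_exposure AS
    SELECT security_id,
           SUM(quantity) AS total_quantity
    FROM curated_positions
    GROUP BY security_id
""")
conn.commit()
```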
Where to Next?
The pipeline’s basic purpose of data transportation and primary goal of provisioning access to quality data in a timely, efficient manner have not changed. However, data pipeline tools have evolved significantly in line with changing business requirements and the adoption of cloud services. The emerging next-gen pipeline tools have been purposefully designed to address the pain points associated with traditional data pipeline development: latency, bespoke and complicated single-use builds, lack of automation and standardization, and reliance on specialized skills and knowledge.
So, if your firm’s use cases have evolved beyond the need to ingest, transform, and load traditional structured, batch-processed investment data and into the realm of large-volume, unstructured, real-time, and streamed data, these next-gen pipeline tools will likely prove invaluable.
Looking at the bigger picture, modern data platforms have also evolved over the past decade. The latest incarnation is data fabric, a framework that aims to support automated, flexible, and reusable pipelines while leveraging ML/AI capabilities.
Want to know more about what’s coming down the line with modern data platforms? Check out Cutter’s Data Fabric and Data Mesh: An Introduction whitepaper.