
Blog

Developing a Financial Analytics & Reporting engine with Azure Synapse

February 7, 2022

We’ve recently had the pleasure of working with Azure Synapse, the latest version of Microsoft’s integrated Data Warehouse/Data Platform offering, on a client project.

Our Director of Professional Services, John Gamble, gives his experience with using it for the first time:

I’m happy to report the project has now gone into the production environment and the client has a lovely new Financial Analytics and Reporting engine.

In this scenario, Azure Synapse provides the link between the source application, in this case Dynamics Finance & Operations (F&O), and the analytic layer provided by SQL Server and Power BI.

As great as it is to put a new system into a Production environment and watch the data flow in, knowing that it is making a real difference to the reporting and analytic capabilities, that’s not the point of this article. What I wanted to talk about was two things:

Greater business reporting flexibility

The first thing is that Azure Synapse gives you more options for reporting and being able to respond to business needs.

The finished application is a “classic” modern batch-loading Azure Data Platform, whereby source extracts are landed into a Data Lake as CSV files.

These are then exposed as “SQL-style” connections in a Serverless SQL pool within Azure Synapse, which in turn becomes the source for ETL pipelines feeding data to the Power BI data models.
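
As a rough illustration of what that “SQL-style” exposure can look like, the sketch below defines a serverless SQL pool view over a landed CSV folder. The storage URL, data source and object names are placeholders rather than the project’s actual objects, and authentication is left out for brevity.

    -- Run inside a user database on the serverless SQL endpoint (objects cannot be created in master).
    -- Point at the Data Lake container that the Dynamics F&O export lands in (placeholder URL).
    CREATE EXTERNAL DATA SOURCE FnoLake
    WITH (LOCATION = 'https://mystorageaccount.dfs.core.windows.net/dynamics365-financeandoperations');
    GO

    -- Expose a landed CSV extract as a SQL-style object that the ETL pipelines can read.
    CREATE VIEW dbo.CustTable AS
    SELECT *
    FROM OPENROWSET(
            BULK 'Tables/CustTable/*.csv',
            DATA_SOURCE = 'FnoLake',
            FORMAT = 'CSV',
            PARSER_VERSION = '2.0',
            HEADER_ROW = TRUE
         ) AS rows;
    GO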

We use a SQL repository to handle slowly changing dimensions and build out a series of Kimball fact/dimension star schemas, which feed data to Power BI models. But it’s the ability of Synapse to create a “live” interface to the Data Lake that makes a real difference.
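
Likewise, a Type 2 slowly-changing-dimension step in that SQL repository might look something like the sketch below. The table and column names are purely illustrative, not the project’s actual schema.

    -- 1. Close off current dimension rows whose attributes have changed in the latest extract.
    UPDATE d
    SET    d.ValidTo   = SYSUTCDATETIME(),
           d.IsCurrent = 0
    FROM   dbo.DimCustomer AS d
    JOIN   stg.Customer    AS s
           ON s.CustomerId = d.CustomerId
    WHERE  d.IsCurrent = 1
      AND (s.Name <> d.Name OR s.Address <> d.Address);

    -- 2. Insert a fresh current row for changed customers (just expired above) and brand-new ones.
    INSERT INTO dbo.DimCustomer (CustomerId, Name, Address, ValidFrom, ValidTo, IsCurrent)
    SELECT s.CustomerId, s.Name, s.Address, SYSUTCDATETIME(), '9999-12-31', 1
    FROM   stg.Customer AS s
    LEFT   JOIN dbo.DimCustomer AS d
           ON d.CustomerId = s.CustomerId AND d.IsCurrent = 1
    WHERE  d.CustomerId IS NULL;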

Consider the following scenario: currently we’re batch loading the platform every six hours, as the data comes from a Dynamics F&O Data Lake export.

At some point Microsoft will change the export to be continuous. Raw data will be “trickle fed” into the Lake as it is updated on the source system.

Synapse then provides the capability to perform SQL processing directly over these feeds automatically, which in turn means that data can be made available to downstream reporting tools pretty much as soon as it lands in the Lake.

If you post a financial journal, update a customer record, change an address, add a new product, in fact update any part of an application, then in theory, those records are available straight away for your analytic/reporting needs – or at least as soon as they hit the Lake.
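
Part of the reason this works is that the serverless queries target wildcard folder paths rather than fixed file lists, so any file the export drops into a folder is picked up by the next query with no pipeline changes. A small illustrative example, with placeholder table and folder names, reusing the hypothetical FnoLake data source from the earlier sketch:

    SELECT TOP 100
           rows.*,
           rows.filepath(1) AS SourceFile   -- the portion of the path matched by the wildcard
    FROM OPENROWSET(
            BULK 'Tables/GeneralJournalEntry/*.csv',
            DATA_SOURCE = 'FnoLake',
            FORMAT = 'CSV',
            PARSER_VERSION = '2.0',
            HEADER_ROW = TRUE
         ) AS rows;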

Figure 1: Dynamics F&O Reporting Application Architecture

This is an extraordinary capability for engineers, and whilst it isn’t a pure real-time streaming “Kafka” model, it does go some way to solving one of the inherent challenges with Data Warehouses.

Namely, that whilst the outputs are very flexible – think cubes, dimensions, hierarchies, etc. – the inputs are very inflexible and rely greatly on the skill of the Data Engineers to produce an efficient set of ETL pipelines.

What this gives you is more flexibility: through Azure Synapse, the data can be made available to your reporting models and analytic tools either “Live” (shown in green) or, if further processing is required first, via batch processing (shown in red).
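
For the batch (red) path, one option is CETAS (CREATE EXTERNAL TABLE AS SELECT) in the serverless pool, which persists a cleaned, typed copy of a raw extract back into the Lake as Parquet for the downstream ETL to pick up. The sketch below is illustrative only: the columns are placeholders, and the data source used for writing would need a credential with write access.

    CREATE EXTERNAL FILE FORMAT ParquetFormat
    WITH (FORMAT_TYPE = PARQUET);
    GO

    CREATE EXTERNAL TABLE dbo.CustTable_Curated
    WITH (
        LOCATION    = 'curated/CustTable/',
        DATA_SOURCE = FnoLake,            -- assumed writable via a database scoped credential
        FILE_FORMAT = ParquetFormat
    )
    AS
    SELECT CAST(AccountNum       AS varchar(20))    AS AccountNum,
           CAST(CreditMax        AS decimal(18, 2)) AS CreditLimit,
           CAST(ModifiedDateTime AS datetime2)      AS ModifiedDateTime
    FROM   dbo.CustTable;                 -- the live view over the raw CSVs
    GO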

Lake storage is incredibly cheap, and Azure Synapse gives you an easy way to work with, explore and query raw data files held in CSV, JSON or Parquet formats, as shown below.

Figure 2: Querying raw CSV files in Azure Synapse Workspace

Figure 3: Performing the same query in SQL Server Management Studio over the same CSV files… extraordinary…
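
The figures show the pattern for CSV, but the same OPENROWSET approach extends to Parquet (and, with a little more work, JSON). A minimal Parquet example with a placeholder path:

    SELECT TOP 10 *
    FROM OPENROWSET(
            BULK 'https://mystorageaccount.dfs.core.windows.net/lake/curated/CustTable/*.parquet',
            FORMAT = 'PARQUET'
         ) AS rows;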

Working with Microsoft

The second point I want to make is that on this engagement we had a real challenge in how the architecture was going to work, and in a couple of cases there was a bit of a leap of faith that functionality required by the application, and scheduled for delivery by Microsoft, would be in place in time.

Synapse is new, but even newer is the Data Lake Export functionality as part of Dynamics F&O. In fact, I’m delighted to say that Microsoft were happy to turn this on for us in a Dynamics F&O Production environment so that we could complete the project.

We worked closely with Microsoft on this project; they knew what we were doing and were keen to see it work.

Our Partner Manager helped connect the dots within Microsoft and opened the right doors to connect us with the Dynamics F&O Product & Engineering teams, which was invaluable.

We are lucky to have those connections, and a lot of people took time to help get this architecture over the line.

Overall, Azure Synapse is new, impressive and has made a real difference on this project, dramatically simplifying access to the raw data exported by Dynamics F&O.

We plan to use it again in another project, but overall, the use of Synapse gives you:

  • More options to ingest, process and query data.
  • Less restriction from technology – once the data is in the Lake you can work with it.
  • More options to bring in data on different schedules.
  • More ability to collaborate.
  • The freedom to work outside the confines of the more “rigid” on-premises data warehouse.

All of this provides an increased ability to support the business.

We look forward to working further to help this Data Platform grow, leveraging data directly from Synapse in reports.

For more information about how these new capabilities can help you make the most of your data, email us at enquiries@c5alliance.com or call +44 1534 633 733.