Monthly Archives: October, 2012

Comparing data between Tabular Model Tables

On Friday I blogged about how to create a composite key to establish a relationship between two tables imported into a Tabular Model from different data sources. My example used a report extract table containing sales data in a relational database and an Excel spreadsheet with sales projections. Once the relationship is in place, the first thing you'll want to do with it is compare the actual sales data with the sales projections to see whether you are meeting sales targets. Since projections were made by store type and year, you'll want to compare them to total sales by store type and year.

Sales Difference Calculation

To make the comparison, you have to write DAX code that sums the actual sales data and then subtracts the projection to determine how much the actuals exceeded or fell short of the projection. To do this, we'll create a new column in the SalesProjections table and call it SalesProjDiff. The column is defined using this DAX code:

=SUMX(FILTER(ReportExtract, ReportExtract[YearStoreType] = SalesProjections[YearStoreType]), ReportExtract[SalesAmount]) - SalesProjections[ProjectedSales]

This new column uses the SUMX function to sum sales by year and store type and then subtracts the sales projection for the same year and store type. To ensure that the correct sum is compared with the right projection value, the filter uses YearStoreType, the composite key you created in each table to define the relationship between them. With that done, you have a column called SalesProjDiff that users can access in client visualization and reporting tools, letting them see how sales have performed against projections in the past and how current sales are tracking against future projections.
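Because the relationship between the two tables already exists, you could also lean on it directly instead of filtering on the key yourself. As a rough equivalent sketch, assuming the YearStoreType relationship is in place, the column could be defined as:

=SUMX(RELATEDTABLE(ReportExtract), ReportExtract[SalesAmount]) - SalesProjections[ProjectedSales]

RELATEDTABLE returns only the ReportExtract rows that match the current SalesProjections row, so the explicit FILTER over the composite key is no longer needed.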

I’ll cover more data modeling scenarios like this one using Tabular Models in my session at the PASS Summit titled Data Modeling Best Practices for Enterprise Tabular Models at 10:15 AM on Thursday, November 8.

Tabular Model Relationships

One of the best features of Tabular Models in SQL Server 2012 is that they can include data from multiple sources such as relational databases, cubes, Excel files, or just about anything else you can think of.  The logical data model of the underlying data is not dictated by this new version of Analysis Services, thereby allowing a Tabular Model to be developed with practically any existing data that’s available to you.

Faster Ramp Up

This makes it easier to get started with Tabular, especially if you're working with a single data source that has a consistent data model. But what about multiple data sources that are modeled differently? For example, most of the detail data you need might be in a report extract table while some budget data sits in an Excel file. To handle this properly, and to avoid ending up with a collection of disparate data in the model, the data modeling work needs to happen after the import takes place. One of the ways to do this is to create the right relationships among the various entities.

Here’s the preexisting Report Extract table you’re working with.  Notice that it contains sales data by store, product, brand and date.

One of the people in the finance department has this spreadsheet, which has sales projections by store type and year for the past, present, and future.  You want report developers who use your Tabular Model to compare the actual to projected sales.

The two sets of data aren't related, so you can't do the actual vs. projected comparisons you need. The problem is that neither table has a unique key you can use to create a relationship between them. To solve this, you'll create a composite key from two columns in each table; it will serve as a unique key in one of the tables and allow you to create the relationship.

Add a column called YearStoreType to the Report Extract table that’s the concatenation of the year and store type columns:

=CONCATENATE(ReportExtract[CalendarYear], ReportExtract[StoreType])
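If you prefer operators to functions, the & text concatenation operator produces the same value:

=ReportExtract[CalendarYear] & ReportExtract[StoreType]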

Add another column, this time to the Sales Projections table. Call it YearStoreType as well (although the column names don't have to match for the tables to be related) and define it this way:

=CONCATENATE(SalesProjections[Year], SalesProjections[StoreType])

Now create the relationship between the two tables, using the Sales Projections table as the lookup table.
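Keep in mind that the lookup side of a relationship must contain only unique key values, or the relationship can't be created. A quick sanity check, sketched here as a throwaway measure with an illustrative name, is to compare the row count against the number of distinct keys:

Projection Key Check := IF(COUNTROWS(SalesProjections) = DISTINCTCOUNT(SalesProjections[YearStoreType]), "Unique", "Duplicates")

If it reports duplicates, the projection data needs to be cleaned up before the relationship will work.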

Add measures such as sums, averages, and any other calculations you need in your model. When you deploy it and load it as a pivot table in Excel, you'll be able to easily drag and drop the sales projections and compare them with the actual sales data for each year and store type. This shows why Tabular Models, which do not require source data to be modeled in any particular form, still require you to define the data model within the Tabular Model itself so that it makes sense to report developers and Excel users.
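As examples of such measures, something along these lines (the names are just illustrative) would cover the basic actual and projected totals that users will want to drag into the pivot table:

Total Actual Sales := SUM(ReportExtract[SalesAmount])
Total Projected Sales := SUM(SalesProjections[ProjectedSales])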

To learn more about data modeling in Tabular Models, attend my session at the PASS Summit titled Data Modeling Best Practices for Enterprise Tabular Models at 10:15 AM on Thursday, November 8.

SQL Saturday #165 – Lincoln, NE

SQL Saturday descends upon Lincoln, Nebraska this weekend, and it's another good chance to get a day of free SQL Server training delivered by many of the top people in the business. A total of 6 tracks at SQL Saturday #165 covering every part of SQL Server will be enough to get you up to speed on whatever topic you're interested in. It's going to be held on the University of Nebraska campus, which is one of the reasons I submitted to speak at this event. I've really enjoyed SQL Saturday events held on campuses, such as the ones I attended in Baton Rouge on the LSU campus and in Portland on the University of Portland campus. The setting works really well for obvious reasons.

I'm presenting two sessions in Lincoln. The first is titled Data Modeling Best Practices for Tabular Models and the second is Real-time Data Warehouse and Reporting Solutions. What's really cool is that both of these sessions are previews of the two sessions I'm doing at the PASS Summit next month. If you're interested in learning more about one of the great new features of SQL Server 2012, then come to my Tabular Models session. It will show you how to effectively make use of existing data sources in Tabular Models for your Business Intelligence solutions. The Real-time DW session will show you how to use the rich feature set of SQL Server to get a real-time data warehouse running successfully.

Data Modeling Best Practices for Tabular Models

The new Tabular models in SQL Server 2012 give us a powerful analytic engine that is faster and easier to implement than traditional multi-dimensional cubes using Analysis Services. In this session, you’ll see how to develop analytic models using your existing data sources and learn best practices for preparing data so that it can be effectively used as a Tabular database. In addition to creating the model, you’ll see how to quickly add measures, hierarchies, and calculated columns to provide a rich user experience and how the model works with Excel and Power View.

Real-time Data Warehouse and Reporting Solutions

In this demo-rich session, we will explore database and ETL architectures that let you smoothly move data from your application databases across the enterprise in real-time to your data warehouse. You will see how to use SQL Server features such as replication, Change Data Capture, SSIS, and other techniques to extract, transform, and load data fast and dependably. We will develop an actual sample system during the session and discuss and walk through alternative real-time architectures, including the use of Data Vault methodologies. We will then see how to use Reporting Services and Power View to present data with a minimum amount of latency.