
Sunday, June 5, 2016

OpenStreetMap and other map sources for Reporting Services maps

SQL Server 2005-2016

The map functionality in Reporting Services may seem a little outdated compared to the nice, interactive maps in Power BI, for example. However, standard reports still play an important role for distributing information in an enterprise, and so do maps in those reports. Microsoft Reporting Services relies on Bing Maps as the tile layer for map reports. You can choose between different map styles (road, aerial, hybrid) but you cannot use any other tile server (like OpenStreetMap, for example). Since the map is loaded when the report renders, your SSRS server also needs an internet connection, which is usually the case. But in a recent project the requirement was to block the report server from connecting to external resources and to use tiles from a GIS server in the local network instead.

If you want to import OpenStreetMap spatial data (points, lat/lon coordinates, polygons etc.) into SQL Server, there are some very useful resources that make this process easy. You can start with osm2mssql and I also blogged about implementing the import process using Microsoft Integration Services. However, when it comes to using a different map tile provider for Reporting Services’ map tile layer, you’re more or less on your own.

So, opening Reporting Services to other map providers is an obvious requirement and there is also a Microsoft Connect entry for it. Feel free to vote for it, but since it has been open since March 2011, I wouldn’t be overly optimistic about seeing this feature implemented in the near future. So this post is about alternative methods of opening SSRS to other map servers. The key idea is to use the SSRS support for a proxy server and to create a custom proxy that handles the communication to the map server.

When SSRS renders the report with a map, the map isn’t retrieved in one piece but divided in tiles as shown below:


In this example, our viewport requires four map tiles which are separately loaded by Reporting Services. Map tiles are usually pre-compiled images for all available zoom levels on the tile server which allows a very performant way to render a map. The tiles are usually stored by coordinates. You might have noticed the effect of the map being reloaded by tiles in other map applications, especially on a slow internet connection.
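Tile coordinates follow the common “slippy map” scheme: at zoom level z the world is divided into a 2^z × 2^z grid, and a lat/lon position maps to a tile via the Web Mercator projection. A minimal sketch of that computation (this is the standard formula used by OpenStreetMap-style tile servers, not code from this post):

```python
import math

def latlon_to_tile(lat_deg, lon_deg, zoom):
    """Map a WGS84 coordinate to slippy-map tile (x, y) at the given zoom."""
    n = 2 ** zoom                      # number of tiles per axis at this zoom
    x = int((lon_deg + 180.0) / 360.0 * n)
    lat_rad = math.radians(lat_deg)
    y = int((1.0 - math.asinh(math.tan(lat_rad)) / math.pi) / 2.0 * n)
    return x, y

# At zoom 1 the world is a 2x2 grid; the equator/prime meridian
# intersection falls into tile (1, 1)
print(latlon_to_tile(0.0, 0.0, 1))
```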

So, when Reporting Services displays a map with a Bing tile layer, it first computes the zoom level and all required tile coordinates, then fetches the tiles from the tile server. If a proxy is in place, the communication to fetch a single tile happens as shown below:


Since we use a proxy server here, the request is routed to the proxy. The proxy in turn forwards the request to the map tile server and then returns the resulting image back to SSRS. The setup of the proxy for SSRS is described here and basically means editing the web.config file in the ReportServer sub-directory below the SSRS instance directory:
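For reference, the relevant piece is the standard .NET defaultProxy element; the proxy address below is just an example for a proxy listening on the same machine (adjust the port to your own setup):

```xml
<configuration>
  <system.net>
    <defaultProxy enabled="true">
      <proxy usesystemdefault="False"
             proxyaddress="http://localhost:8888"
             bypassonlocal="True" />
    </defaultProxy>
  </system.net>
</configuration>
```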


As you see, we’re pointing SSRS to a proxy on the same machine here, since we want to replace the proxy functionality with our own proxy as shown in the diagram below:


In this example, our custom proxy receives the request for tile r1202030320, where “r” means that we’re asking for a road map and the number is the so-called quad key of the tile. The quad key is a very clever way of storing x/y coordinates and zoom level in a single number. Actually, decoding the quad key to x=532, y=342, zoom=10 is quite simple and well explained (including C# code) by Microsoft here. Our custom proxy can then use this information to make a call to a different tile server of your choice, in this case an OpenStreetMap-based tile server. The only remaining task is to convert the image from PNG (as delivered by the tile server) to JPEG (as expected by SSRS) and to return the image tile to SSRS. Using this method, SSRS actually believes the communication took place with the Bing tile server; however, our proxy has translated the request for a different tile server.
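The decoding step is easy to verify. Here is a short sketch of the quad key decoding (a Python transcription of the algorithm Microsoft documents; the C# original is linked above):

```python
def quadkey_to_tile(quadkey):
    """Decode a Bing Maps quad key like '1202030320' into (x, y, zoom).

    Each digit encodes one bit of x (low bit) and one bit of y (high bit),
    most significant bit first; the zoom level is simply the key length.
    """
    x = y = 0
    zoom = len(quadkey)
    for i, digit in enumerate(quadkey):
        bit = zoom - i - 1
        d = int(digit)
        x |= (d & 1) << bit
        y |= ((d >> 1) & 1) << bit
    return x, y, zoom

# The tile from the example above:
print(quadkey_to_tile("1202030320"))  # → (532, 342, 10)
```

With x, y and zoom in hand, the proxy can build a request following the usual URL pattern of OSM-style tile servers, e.g. https://tile.openstreetmap.org/{zoom}/{x}/{y}.png.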

Using this approach you can also implement offline map support where SSRS doesn’t need to connect to the internet at all but gets all required tiles directly from the custom proxy server. For this approach you need to crawl the desired tiles at all required zoom levels from time to time and store the resulting tile files locally on the proxy server.


However, before you start downloading tons of tiles, please check the terms of use of the individual tile server. Many tile servers that are based on OpenStreetMap offer free basic access (a limited number of tiles per week/month) if you add a proper attribution (usually name/link to the service and to OpenStreetMap and its contributors). In particular, bulk access to OpenStreetMap’s main tile servers is usually not allowed (see their terms of use) as these servers run on donated resources and have a strictly limited capacity.

So, what does the result look like? I have included some maps that show the location of cafés in the town of Düsseldorf (zoomed in to some detail). All of the following SSRS maps were created with Reporting Services using the same report and showing exactly the same area. In fact, I just reloaded the report with different proxy server settings in place.

bing This is actually the standard map based on Bing Maps tiles (no proxy in place) as rendered by Reporting Services.
mapquest This screenshot shows exactly the same area as the map before (same report and report parameters in Reporting Services) but now we’re routing the map request to MapQuest.
Please be sure to read and understand their terms of use before accessing tiles from this service.
Data, imagery and map information provided by MapQuest, OpenStreetMap and contributors, ODbL
thunderforest-opencyclemap This tile rendering has been created using Thunderforest’s OpenCycleMap profile.
Again, please make sure you have read and understood their terms of use before using this service.
They also offer access for high-volume tile requests. You can find the prices for the different service levels on their homepage. At the time of writing, usage is free for hobby projects up to 150,000 tile requests per month if the map is attributed properly.
thunderforest-mobileatlas Again this rendering has been created based on Thunderforest tiles but this time we used the “Mobile-Atlas” profile resulting in tiles with a very high contrast.
thunderforest-transport This is also a rendering based on Thunderforest tiles, this time with the “Transport” profile.

I cannot share the source code for the proxy itself here; however, it’s not difficult to implement based on some lightweight proxy implementations. You can find a lot of links here.

So with this workaround we’re able to use many more map providers in Reporting Services and even support offline/disconnected server scenarios. Personally, I prefer Bing road maps for reporting purposes as they are very tidy and clear. But now we have the choice.

Sunday, April 24, 2016

Why doesn’t my decision tree recognize a simple pattern?

Data mining algorithms are sometimes perceived to be extremely intelligent, equipped with superpowers humans don’t have. In today’s post I’d like to show a simple example in which a mining model fails to detect a very simple rule in the data. This illustrates the need for feature engineering, not only to enrich the data with new associations but also to include simple calculations, for example deltas instead of absolute values.

I’m starting with a very simple pattern of two variables (a, b) and a single output (y):


As you can see, “Y” equals TRUE if “A” and “B” are identical (the boolean operation is “not xor”). So this is a very simple pattern which should be easily detected by a data mining algorithm. In order to use this data as the source for an algorithm, I replicated this pattern 50 times, resulting in 200 rows of data. In R this reads as follows:

> newdata <- data.frame(a=c(1,0,1,0), b=c(0,0,1,1))
> data <-"rbind", replicate(50, newdata, simplify = FALSE))
> data$y <- as.factor(data$a==data$b)

Let’s start with a simple decision tree based on this data:

> fit <- rpart(y~.,data)

The result shows that no split has been detected in the tree or – in other words – the rpart model didn’t understand our simple rule:

n= 200

node), split, n, loss, yval, (yprob)
      * denotes terminal node

1) root 200 100 FALSE (0.5000000 0.5000000) *

Instead, all observations are treated as “FALSE” in our root node. Consequently, if we ask the model to do the prediction, it always responds with the answer “false”:

> cbind(newdata,predict(fit,newdata, type="class"))

  a b predict(fit, newdata, type = "class")
1 1 0                                 FALSE
2 0 0                                 FALSE
3 1 1                                 FALSE
4 0 1                                 FALSE

The reason for this behavior is easy to understand if you look at the decision the model has to make for the first split. Since our data cases are perfectly balanced, neither “A” nor “B” seems to have any effect on the output. For example, if “A” equals zero you find “Y” being 50% true and 50% false (see the table from above). The same situation is seen with “A” equals one, and likewise for “B”. So the feature selection algorithm decides that none of our input variables has a univariate influence on the output variable.

> table(data$a, data$y)

    FALSE TRUE
  0    50   50
  1    50   50
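You can reproduce this feature-selection dead end with a quick entropy calculation (a sketch in Python rather than R, just to make the arithmetic explicit): the information gain of a univariate split on “A” or “B” is exactly zero, while a derived feature “A == B” separates the classes perfectly.

```python
from collections import Counter
from math import log2

def entropy(labels):
    n = len(labels)
    return -sum(c / n * log2(c / n) for c in Counter(labels).values())

def information_gain(rows, feature, target="y"):
    """Entropy reduction achieved by splitting rows on one feature."""
    base = entropy([r[target] for r in rows])
    for value in {r[feature] for r in rows}:
        subset = [r[target] for r in rows if r[feature] == value]
        base -= len(subset) / len(rows) * entropy(subset)
    return base

# 50 copies of the 2x2 "not xor" pattern, as in the R example
pattern = [{"a": a, "b": b, "y": a == b} for a in (0, 1) for b in (0, 1)]
data = [dict(r) for r in pattern for _ in range(50)]

print(information_gain(data, "a"))  # → 0.0: "a" alone looks useless
print(information_gain(data, "b"))  # → 0.0: so does "b"

# ...but a derived feature a==b makes the pattern trivial to split on
for row in data:
    row["ab_equal"] = row["a"] == row["b"]
print(information_gain(data, "ab_equal"))  # → 1.0 (one full bit)
```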

So the reason for our decision tree not detecting the pattern is the unnaturally perfect balance of the data. If we use a random forest instead, the bootstrap draws wouldn’t be perfectly balanced and the algorithm would detect the pattern:

> fit <- randomForest(y~.,data)
> plot(fit)
> cbind(newdata,predict(fit,newdata, type="class"), predict(fit,newdata, type="prob"))

  a b predict(fit, newdata, type = "class") FALSE  TRUE
1 1 0                                 FALSE 0.728 0.272
2 0 0                                  TRUE 0.264 0.736
3 1 1                                  TRUE 0.218 0.782
4 0 1                                 FALSE 0.752 0.248

Ok, this looks much better. And the same holds for our decision tree: if we take a sample (75% of the data rows, resulting in 150 rows for this example), the result looks much better. Here’s the resulting tree:


As you can see, the leaf nodes are “perfect” with no error, so our pattern has been detected in this case. Of course you wouldn’t expect such a perfect distribution as the one I constructed for this demonstration, but this effect could still be an issue for your model. And just as it is unlikely to have such a balanced distribution in real life, it is likely that you’re dealing with more than just two input variables. So, let’s add a third variable which approximates “Y” with a success rate of 90%:

> data$c<-as.factor(ifelse(data$y==TRUE,ifelse(runif(n=nrow(data))<=.9,1,0),ifelse(runif(n=nrow(data))<=.9,0,1)))

Now “C” is a good but not perfect approximation of “Y”. Let’s retrain the tree using “A”, “B” and “C” as input variables. As you can see from the tree visualization below, the algorithm chose the “easy” way and used our approximation as the only split condition, instead of detecting the pattern (as in the case where “C” was missing):


As a consequence our true prediction rate goes down from 100% (pattern detected) to about 90% (approximation variable used); 14 out of 150 cases failed. Ok, this is still good, but the point is that we get better results if we think about the way splits are calculated and add additional attributes to support the decisions (in this case we could have added “A==B” as a helper attribute). There are many more examples:

  • Instead of absolute values (temperature 26°, 28°, 32°) it might be better to add relative (delta) values (temperature delta 2°, 4°)
  • Instead of two absolute variables (energy consumption and workload) it might be a good idea to also add the ratio (energy consumption / workload)
  • Difference/Change of a value compared to the average (or a regression) of a given time range before (for example, relative change of a value compared to the linear regression of the last 3 hours).
  • As shown in the example above: for two variables it might be good to add a measure of similarity (equal or almost equal)


Data mining algorithms like the decision tree sometimes struggle with simple patterns in data. Instead of feeding the data “as is” into the algorithm, it’s often a good idea to add some derived information (delta, ratio, correlation, measure of equality etc.) to support the model.

Sunday, April 3, 2016

Using R Server in SQL Server 2016 to calculate the expected value for CRM leads (Monte Carlo method)

SQL Server 2016

In this post I’m demonstrating a use case for the new R Server (formerly known as Revolution R Enterprise) in SQL Server 2016 to calculate the expected value for CRM leads using a Monte Carlo approach.

To keep things simple, let’s assume we have the following leads in our CRM:


For example, the chance of winning lead number 7 with a value of $1,000,000 is 10%. So what is the amount of incoming orders we can plan with (assuming the probability for each individual lead is correct)? A common approach is to use a weighted sum (the sum over probability times value), which is easy to calculate in T-SQL:

select sum([Probability]*[Value]) ExpectedValue from CRM


While this approach works well with a large number of leads of similar size, for the example above we have to realize that $100,000 of the $256,000 result from the relatively unlikely win of lead number 7. And in fact, we can only win or lose this lead, which means a value of 0 or 1 million but nothing in between. So this approach may be misleading with skewed data.

Another approach is to use a threshold and only count the leads with a probability above the threshold. The query would look somewhat like this:

select sum([Value]) ExpectedValue from CRM where [Probability]>=.7


Here we’re only counting leads with a probability of at least 70%. We just need to be careful not to mistake the threshold of 70% for a probability of the result. It would be wrong to interpret the result as “with a probability of 70% we can expect incoming orders of at least $52,000”, because each lead is won or lost independently of the other leads.

So, what could be a more realistic method to estimate the expected value of the leads from above? One idea is to simulate cases where each lead is either converted into an order or not, at the individual probability of the lead. If we run, say, 100,000 such simulations, we can look at the distribution of the results to get a better understanding of the resulting total. This approach is called the Monte Carlo method. While we could implement this in T-SQL (for example, look at an older blog post of mine about Monte Carlo in T-SQL), it’s easier to do in R, and with the new R Server capabilities in SQL Server 2016 we can use R directly in the database for this calculation (see here for the basics about running R scripts from T-SQL stored procedures).
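The idea is easy to prototype outside the database as well. The sketch below uses Python instead of R, with a hypothetical lead list: the actual table is only visible in the screenshot above, so the numbers here were chosen to reproduce the totals mentioned in the text ($256,000 weighted sum, $52,000 above the 70% threshold, lead 7 at $1,000,000/10%) and are otherwise made up:

```python
import random

# Hypothetical (probability, value) pairs reconstructed to match the
# aggregates quoted in the text; the real CRM table is in the screenshot.
leads = [(0.9, 20000), (0.8, 12000), (0.7, 20000), (0.5, 100000),
         (0.4, 120000), (0.2, 82000), (0.1, 1000000)]

def simulate_once(rng):
    """One experiment: each lead is won (and counted) at its own probability."""
    return sum(value for p, value in leads if rng.random() <= p)

def simulate(n=100_000, seed=42):
    rng = random.Random(seed)
    results = sorted(simulate_once(rng) for _ in range(n))
    # quantile levels matching the ones used in the stored procedure below
    levels = [0.001, 0.005, 0.01, 0.02, 0.05, 0.10, 0.25, 0.50, 1.00]
    return {q: results[min(int(q * n), n - 1)] for q in levels}

for level, value in simulate(n=10_000).items():
    print(f"{level:>6.1%} quantile: ${value:,}")
```

Reading a line such as the 10% quantile: in 90% of the simulated cases the total was at least that value.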

Let’s start with the resulting procedure code before I go into more details:

EXEC sp_execute_external_script
@language = N'R'
, @script = N'
eval<-function() {sum(ifelse(runif(min = 0, max=1, n=nrow(mydata))<=mydata$Probability, mydata$Value,0))}
r<-replicate(100000, eval())
q<-quantile(r,  probs = c(0.1, 0.5, 1, 2, 5, 10, 25, 50, 100)/100)
result<-data.frame(value=q, quantile=names(q))
'
, @input_data_1 = N'select  CRMID, Probability, Value from CRM'
, @input_data_1_name=N'mydata'
, @output_data_1_name = N'result'
with result sets ((
  [value] float
  ,quantile nvarchar(10)
))

The R script itself is marked in blue here. It runs 100,000 random experiments on our input data. In each experiment, 7 evenly distributed random values are drawn (one per row in our dataset). A lead’s value is only counted if its random value is below the given probability of the lead (which happens the more rarely the smaller that probability is). We then calculate quantiles and return the result as a SQL table.

Here is the result of this T-SQL statement:


How do we read this result? Here are some examples:

  • Line 6: In 10% of the cases, the value was below $52,000 and, consequently, in 90% of the cases, the value was above $52,000
  • Line 2: In 99.5% of the cases the value was above $15,000
  • Line 5: In 95% of all cases the value was above $37,000

Or, in other words, at a confidence level of 90% we can assume to result in a value of at least $52,000 here. So this approach does not only give a single value but allows you to understand the expected result based on a given confidence.

Of course, T-SQL is not a good environment to develop and test even a small R script like the one above. Usually when working with R you follow a more interactive approach, so I suggest developing the script in an interactive R tool like RStudio. In order to do so, I’m using some simple wrapper code to provide the data set from SQL Server, as shown below:

library(RODBC)

db_connection <-odbcDriverConnect(
  paste("Driver={SQL Server Native Client 11.0}",
        "Server=localhost;Database=TestDB",   # server/database are placeholders
        "Trusted_Connection=yes", sep=";"))

mydata<-sqlQuery(db_connection, "select CRMID, [Probability], [Value] from CRM")
odbcClose(db_connection)

eval<-function() {sum(ifelse(runif(min = 0, max=1, n=nrow(mydata))<=mydata$Probability, mydata$Value,0))}
r<-replicate(100000, eval())
q<-quantile(r,  probs = c(0.1, 0.5, 1, 2, 5, 10, 25, 50, 100)/100)



Again, the code in blue is the final R script which we can copy over to our T-SQL procedure for production. The RStudio environment allows us to develop the script interactively. The surrounding code loads the data into a data frame with the same result as in our T-SQL sp_execute_external_script call. The data transfer used by SQL Server is much more efficient, but for testing purposes the ODBC call is sufficient.

For example, we can plot a histogram in R:



This shows the distribution of our cases. For example, there are no cases that end with a value between 500,000 and 1 million. Or we could plot the share of cases exceeding a given value (the complementary cumulative distribution):

h<-hist(r, plot = F, breaks=1000)
plot(x=h$breaks[-1], y=(100000-cumsum(h$counts))/100000, type="l", ylim=c(0,1))




While R is mostly known for machine learning and advanced statistical calculations, it can also be useful for simple simulations like the one above, where we analyzed the distribution of leads in CRM and calculated an expected value at a given confidence. Doing the same in T-SQL would result in quite a lot of SQL code, which in turn makes the procedure more difficult to read and understand (compared to our short R script). Another option would be to put the same logic in a CLR library, but then we would have to deploy the library separately instead of keeping the code in the database. However, developing R code with SQL Server tools like Management Studio is not much fun; instead, we used a short wrapper around the code to develop and test it in an interactive R GUI like RStudio.

Saturday, March 12, 2016

A common pitfall with correlations in timeseries

Correlations are often considered an important measure to understand the underlying (possibly hidden) patterns in data sets. But as with other statistical measures, a complex situation (many variables, many rows of data) is reduced to a single numeric value, which can be problematic. For example, Anscombe’s quartet shows four totally different patterns of data with the same mean, variance, correlation and linear regression.


Source: Wikipedia (

Many articles about correlations also focus on the difference between correlations with and without causality. For example, there is a correlation between the sales of ice cream and the number of cases of sunburn. Having such a correlation allows us to estimate one fact (e.g. cases of sunburn) from observations of another fact (e.g. sales of ice cream). However, there is no direct causality between the two, which means that you cannot control one fact with the other. For example, if you prohibit selling ice cream, you will not influence the cases of sunburn. The correlation is still valid, though, because the two facts share the same cause (hot temperatures in summer because of high solar radiation). So, correlations can be divided into at least three classes:

  • causality (A => B)
  • correlation with a common reason (there exists a C with C=>A and C=>B and as a result A correlates with B)
  • correlation without a common reason / accidental correlations (there doesn’t exist any C with C=>A and C=>B and the correlation between A and B is purely random and not reliable)

While the first two are a reliable source for detecting patterns in data sets, accidental correlations can lead to wrong results. You can find a lot of strange and even funny correlations, for example on this website (the divorce rate in Maine correlates almost perfectly with the consumption of margarine). Another very prominent example is the correlation between media mentions of Jennifer Lawrence and the Dow Jones, which was explained by Tom Fawcett in his article “Avoiding common mistakes with time series”:



As a consequence, we need to avoid these false correlations. But are such correlations likely to appear, or are they a very rare observation? Well, the sheer number of internet sources showing funny examples like the one above could indicate that false correlations are not unlikely to happen.

In order to investigate this, I generated random walk processes based on normally distributed random values. Random walk processes are processes where the random variable is the delta between two consecutive points (not the absolute value of the point). Many real-life processes can be described as random walks, for example temperature (for machine sensors or for weather data), birth rates, even sales (at least to some extent), so these processes are likely to be seen when analyzing time series data.

The setup for the experiment was to generate 10 variables with 100 observations each. Here is an example:



Next, I calculated the correlation matrix for the 10 variables to find strong positive/negative correlation:


As you can see we have some series with a strong correlation, especially with a strong positive correlation (deep blue = series 6 and series 10 with a correlation of .83) in this example. Plotting the series with the strong correlation looks like this:


Of course, not every combination correlates well, for example series 6 and 8 have a correlation near zero. But in this example with just 10 variables we found 2 with a good correlation although the series were created randomly. Since this was just a single random experiment with no statistical relevance, I repeated the experiment 100,000 times (Monte Carlo). The random number generator was Mersenne Twister, normally distributed random variables have been computed using Box-Muller. Here is the distribution of the best absolute correlation between two variables in each experiment:
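The experiment is easy to reproduce. Here is a small sketch (in Python rather than the original implementation, and with the trial count reduced to keep the runtime short) that generates 10 random walks of 100 observations each, finds the best absolute pairwise correlation, and repeats the experiment:

```python
import random
from itertools import accumulate, combinations
from math import sqrt

def random_walk(n, rng):
    """Random walk: cumulative sum of normally distributed deltas."""
    return list(accumulate(rng.gauss(0, 1) for _ in range(n)))

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / sqrt(vx * vy)

def best_correlation(n_series=10, n_obs=100, rng=random):
    walks = [random_walk(n_obs, rng) for _ in range(n_series)]
    return max(abs(pearson(a, b)) for a, b in combinations(walks, 2))

# Repeat the experiment; the text reports ~99% of runs reaching at least .7
rng = random.Random(2016)
trials = [best_correlation(rng=rng) for _ in range(200)]
share = sum(1 for t in trials if t >= 0.7) / len(trials)
print(f"share of trials with best |r| >= .7: {share:.0%}")
```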


It may be surprising that such a majority of our random experiments ended with at least two series showing a good correlation. The result shows that in 99% of the experiments the best correlation was at least .7, in 88% it was at least .8 and in 34% it was at least .9.

This means that with a confidence of almost 90% we can expect a best correlation of .8 in this random experiment; in other words, it is very likely to find a correlation among 10 random walk processes (with 100 observations each). It also means that when analyzing data you should not blindly celebrate every correlation being found, but on the contrary be very skeptical about correlations in your data. You might argue that this effect is a result of our relatively short time series with only 100 values (“rows”) for 10 given variables. However, repeating the test with 10,000 values per variable gives exactly the same result. The most important influence is the number of variables. While the best correlation is rather poor with 2 or 3 variables, you can be almost absolutely sure to find a correlation above .8 with 25 variables, as shown in the table and chart below (black line: correlation of .8 and above; the shaded area shows the range from correlation .7 to .9).




Correlations between variables in data sets are often equated with meaning: one may think to have found a hidden pattern when a correlation is detected. However, this post shows that correlations are not rare, but are very likely to be discovered even in randomized data sets with only 10 variables. As a result, we need to carefully examine each correlation we find in a data set. A first step can be to use a holdout and to test the discovered correlations on the holdout data set (much like we use training and testing data sets in data mining).

Sunday, January 10, 2016

Fill down table in T-SQL (last-non-empty value)

SQL Server 2012-2016

A common task in data processing is to fill null values in a table with the latest existing value. For example, look at the following inventory table:


Let’s assume we only get an inventory record, if the inventory changes but we want to fill the missing values (for example to create a snapshot fact table).

The final result should look like this with null values being filled with the last existing value:


In Power Query (or “Get and Transform” if you’re using Excel 2016) this is an easy task using the fill-down function, which does exactly what we want:


So, how can we get the same effect in T-SQL? We could use a sub-query, but in this post I’m using a window function. The idea is to generate row groups such that each row group starts with a non-null value and has a unique identifier. This allows us to use the first_value function for each group to get the existing value for that group.

In order to build the row groups I start with a change indicator for the table values. The indicator is 1 if we have a balance value, 0 if not. The resulting query is quite simple:

select *, case when UnitsBalance is null then 0 else 1 end ChangeIndicator from [dbo].[Inventory]


The next step is to create a running total over the change indicator. In fact, this running total already defines our row groups. I’m using the sum() over window function here. The query from above is converted into a common table expression (cte) as this is easier to read:

with help1 as (
select *, case when UnitsBalance is null then 0 else 1 end ChangeIndicator
from [dbo].[Inventory]
)
select *, Sum(ChangeIndicator) over (order by DateKey) RowGroup from help1
order by DateKey


As you see, the row group number increments with each existing balance value, which means that the first row of each group always contains a balance value (with the exception of the very first group if the table starts with missing values). In order to finalize the query, we just need the first_value function to get the value of the first row per row group:

with help1 as (
select *, case when UnitsBalance is null then 0 else 1 end ChangeIndicator
from [dbo].[Inventory]
)
, help2 as (
select *, Sum(ChangeIndicator) over (order by DateKey) RowGroup from help1
)
select *,
case when UnitsBalance is not null then UnitsBalance
else first_value(UnitsBalance) over (partition by RowGroup order by DateKey)
end UnitsBalanceFillDown
from help2
order by DateKey



So this is exactly what we wanted (if you remove the columns that we just needed during the process from the final output, you get exactly the result from above).
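The same row-group idea can be sketched in a few lines outside SQL (a Python illustration of the logic, not part of the original post): a running count of non-null values defines the groups, and taking the first value of each group collapses to simply remembering the last non-null value seen:

```python
def fill_down(values):
    """Fill None entries with the last preceding non-None value,
    mirroring the running-total row-group logic of the T-SQL query."""
    filled, last = [], None
    for v in values:
        if v is not None:
            last = v          # start of a new "row group"
        filled.append(last)   # first value of the current group
    return filled

units = [None, 20, None, None, 30, None, 10, None]
print(fill_down(units))  # → [None, 20, 20, 20, 30, 30, 10, 10]
```

Note that leading None values stay None, just like the first rows of the SQL result before any balance value exists.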



Window functions in T-SQL are a good solution to fill gaps in your data set, similar to a last-non-empty aggregation.
As I’m using the first_value function here, just a reminder to avoid frustration: first_value ordered by date descending and last_value ordered by date ascending do not give the same result, because last_value by default operates on a window frame that ends at the current row.