We just started using NewRelic at work and I’ve really been digging it. We also use SignalR, which (by design) can use long-polling connections. Unfortunately, those connections tend to skew the metrics in NewRelic. Fortunately, NewRelic provides an API to ignore certain transactions, so I thought I’d be able to tell it to just ignore SignalR.

At first, I tried this answer on StackOverflow, but it ended up ignoring all requests, not just the ones for SignalR. In the end I had to use an OWIN middleware module to get the job done.

Here is the code I ended up with:

using System;
using System.Collections.Generic;
using System.Threading.Tasks;

using AppFunc = System.Func<System.Collections.Generic.IDictionary<string, object>, System.Threading.Tasks.Task>;

public class NewRelicIgnoreTransactionOwinModule
{
	private readonly AppFunc _nextAppFunc;

	public NewRelicIgnoreTransactionOwinModule(AppFunc nextAppFunc)
	{
		_nextAppFunc = nextAppFunc;
	}

	public Task Invoke(IDictionary<string, object> environment)
	{
		object path;
		if (environment.TryGetValue("owin.RequestPath", out path) &&
			((string)path).IndexOf("signalr", StringComparison.OrdinalIgnoreCase) > -1)
		{
			// Tell the NewRelic agent to exclude this transaction from metrics.
			NewRelic.Api.Agent.NewRelic.IgnoreTransaction();
		}

		return _nextAppFunc(environment);
	}
}

To use this in your OWIN startup code, register the middleware in the pipeline.
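Assuming the Katana host (the Microsoft.Owin package and its `IAppBuilder` interface, which is an assumption about the hosting setup rather than something shown above), registration would look something like this sketch:

```csharp
using Owin;

public class Startup
{
	public void Configuration(IAppBuilder app)
	{
		// Register the middleware early so it sees every request
		// before the rest of the pipeline runs.
		app.Use(typeof(NewRelicIgnoreTransactionOwinModule));

		// ... map SignalR and configure the rest of the pipeline here.
	}
}
```

Katana instantiates the middleware type via reflection, passing the next `AppFunc` to its constructor and calling `Invoke` per request, which is why the class above follows that constructor/`Invoke` convention.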


Just like a muscle, technical skills can and will diminish over time if you don't take the time to practice them regularly.

About 8 years ago I took a C++ course at Weber State University and everything was great. I had just finished my C course, so C++ was fairly straightforward and I didn’t have any problems with the language. I was even able to teach myself C# using what I learned from C and C++ and have been using C# ever since.

Since that time, I have not written any C/C++ code. Zilch. That is, until just recently, when I needed to write some C++ code for libsass-net to add support for generating sourcemap files. I didn’t really even need to write any code, just hook into the existing code. Needless to say, it wasn’t that simple.

Problem after problem

Before I was even able to get started coding, I had already hit a problem after pulling the latest changes from libsass. The compiler kept complaining about a method named UNICODE, but I couldn’t figure out what the problem was. The only hint I had was that Visual Studio was highlighting this method differently than all the other methods.

After a while I finally figured out that a configuration setting in my project file was telling Visual C++ that I wanted to use Unicode strings, which causes the compiler to define the UNICODE preprocessor macro; the preprocessor was then clobbering any identifier with that name, including the method. I believe I would have found the solution quicker if I had been more familiar with the toolset.

Throughout the process of trying to figure out why things weren’t working I found myself having a hard time understanding what most of the code was actually doing. C++ is a very powerful and very terse language; I found myself struggling to keep track of all the symbols that were in front of me. I was lost in a world of template methods, void * pointers, and lots of other features I vaguely remember.

Keeping up to date

I have decided that I should spend a little time maintaining at least a reading level in the various languages that I have learned over the years. I know that I won’t be able to easily maintain fluency in all of them because I won’t be writing in them daily, but I can at least retain some of my skill set by reading other people’s code.

In fact, I think by maintaining the libsass-net project, I am in an ideal situation: I am required to write a minimal amount of C++ and need to have at least a minimal reading comprehension of the language. Hopefully I’ll be able to keep my skills somewhat up-to-date without having to go overboard and force myself to make my own project to practice.

Regardless of how you approach the problem, be aware of the skills that you are letting atrophy and take the time to practice them if those skills are important to you. If you do not, you may find yourself in a similar situation to my own, and it’s not the most pleasant place to be.

I learned the most amazing shortcut today that likely would have saved me years of time in the past. Full credit goes to my co-worker Brent Keller for this.

While doing some pair-programming with Brent today, I noticed something peculiar happen on his screen. It looked like he executed the query, but the results did not match what the query should have returned at all. Instead, the results window showed a large amount of metadata about a table.

How is this possible? All you need to do is highlight an object in the query editor and press Alt + F1, and you will see a lot of useful metadata in the query results window. (Under the hood, Alt + F1 is mapped to the sp_help system stored procedure by default.) No need to traverse the tree in Object Explorer anymore!

  • When you execute this on a table, you get very useful information about the columns, indexes, and constraints, and which objects reference the table.
  • When you execute this on a table-valued function, you get the output columns and the input parameters.
  • When you run this on a view, you get the output columns.
  • When you execute it on stored procedures and scalar functions, you get the input parameters.
  • When you execute it on types (I assume this works on user-defined types as well), you get information about the type.
  • I’m sure there are many other object types that this shortcut works on.

Hopefully this information will make you as happy as it made me.

When life gives you an O(nk) algorithm, it's time to get creative and respond with your own logarithmic algorithm.

At DevResults we do a lot of work with maps. One such activity involves KML shape files of things like countries, states/provinces, and cities. While learning how to set up a new site, we grabbed the shape files for the United States from GADM and started the import process.

An hour later, the process still hadn’t finished the first (and largest) shape file. Something was definitely wrong, so I dug in to find where the code was stuck. It turned out the code was iterating through all the polygons and combining them into one big SqlGeography instance via the STUnion method. Below is an example of the code that was doing this:

SqlGeography shape = polygons[0];
for (int i = 1; i < polygons.Length; i++) {
	shape = shape.STUnion(polygons[i]);
}

return shape;

When I started searching around for some alternatives to STUnion, I found this graph:

Classic O(nk) problem from the look of it.

Taming the beast

From the graph, we know that we can combine small shapes very quickly, while large shapes take a very long time. This means our algorithm should minimize the number of operations we perform on large shapes. With this optimization goal in mind, we can write a divide and conquer algorithm that combines shapes into increasingly larger shapes, so that by the end only two large shapes remain to merge.

First, let’s get the implementation out of the way:

public static SqlGeography CombinePolygons(List<SqlGeography> polygons, int start, int end)
{
    if (start > end) return null;
    if (start == end) return polygons[start];

    int midpoint = (start + end) / 2;

    // recursively combine each half of the list
    SqlGeography left = CombinePolygons(polygons, start, midpoint);
    SqlGeography right = CombinePolygons(polygons, midpoint + 1, end);

    // if both halves have shapes, combine them
    if (left != null && right != null)
        return left.STUnion(right);

    // if only one has a shape, pick whichever has a value
    return left ?? right;
}

So, the way this works is that it recursively splits the list of polygons down to single polygons, unions adjacent pairs together, and returns each combined shape to be unioned with its sibling, until in the end we have one shape. This means that most of our operations are done on small objects, minimizing the number of merges involving large objects.
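To see the distribution of merge sizes concretely, here is a small self-contained sketch (plain C#, with integer "sizes" standing in for SqlGeography shapes, since the real type requires SQL Server's spatial library) that mirrors the recursion and counts how many merges happen at each operand size:

```csharp
using System;
using System.Collections.Generic;

public static class MergeCounter
{
    // Key: size of each operand in a merge; value: how many merges of that size occurred.
    public static readonly Dictionary<int, int> MergesBySize = new Dictionary<int, int>();

    // Mirrors CombinePolygons, but "merging" just adds the integer sizes.
    public static int Combine(List<int> sizes, int start, int end)
    {
        if (start > end) return 0;
        if (start == end) return sizes[start];

        int midpoint = (start + end) / 2;
        int left = Combine(sizes, start, midpoint);
        int right = Combine(sizes, midpoint + 1, end);

        // With a power-of-two input the two halves are always equal in size,
        // so recording the left operand's size is enough.
        int count;
        MergesBySize.TryGetValue(left, out count);
        MergesBySize[left] = count + 1;

        return left + right;
    }

    public static void Main()
    {
        var sizes = new List<int>();
        for (int i = 0; i < 512; i++) sizes.Add(1);

        int total = Combine(sizes, 0, sizes.Count - 1);
        Console.WriteLine(total);             // 512: everything got merged
        Console.WriteLine(MergesBySize[1]);   // 256 merges of two tiny shapes
        Console.WriteLine(MergesBySize[256]); // only 1 merge of two large shapes
    }
}
```

For 512 polygons there are 511 merges in total, but only a single one involves two large (size-256) shapes, which is exactly the property the graph above says we want.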

For example, consider 512 polygons.

Operation Count    Left Size    Right Size
256                1            1
128                2            2
64                 4            4
32                 8            8
16                 16           16
8                  32           32
4                  64           64
2                  128          128
1                  256          256

We can easily see that we are now minimizing the number of operations that we are doing on large shapes.

Reviewing the solution

So how well does this work out? The original approach never finished the first shape file after an hour and a half, while this algorithm processes the same file in just under 4 minutes. A great result from a simple change in how we process the polygon shapes.