Baseball Analogy for Scrum

People where I work sometimes question why we use points in Scrum instead of hours. Since we have an unusual number of baseball fans, I wrote the following to use baseball as an analogy for why we use points.

Although Scrum uses rugby metaphors, I am going to rephrase them as baseball metaphors, since almost everyone here can relate to those more easily.

I am going to give away the punch line here: points are a measure of value to the company, not a measure of time.

Let’s look at a baseball game for an analogy. No matter how much effort or time it takes to get to home plate, you only get one point for that activity.

First Inning

  1. Dustin Pedroia comes up to the plate. He swings and hits a home run. The Red Sox score 1 point. Time: 5 minutes.
  2. Daniel Nava is up next, but strikes out swinging.
  3. Christian Vazquez is up next. The pitcher ends up walking him to first base.
  4. Next up is Jonathan Herrera, who hits a routine ground ball and makes it to first base. There are now runners on first and second.
  5. Allen Craig bats next. He swings at and misses the first three pitches, and is out.
  6. David Ortiz hits a nice fly ball to left field. The left fielder misses it, and Ortiz runs to 2nd base. Christian Vazquez scores a point. There are now runners on second and third. Time for this point: 20 minutes.
  7. Brock Holt, up next, hits a nice ball to right field. It is caught. The inning is over.

So what happens to the “work” that went into getting runners to second and third base? It is lost, an unfortunate side effect of the baseball rules. In the working world, however, that effort is not lost; it just can’t be counted until it is completed.

If an inning were a sprint, then when the Red Sox came up to bat again, the two runners would return to their bases and resume where they left off.

Inning Two (preloaded, like a sprint)

  1. David Ortiz is on 2nd base. Jonathan Herrera is on 3rd base.
  2. Mookie Betts hits a home run. 3 points are scored. Time: 5 minutes.

The Red Sox have scored a total of 5 points. The time spent was 30 minutes, so the average time to score 1 point is 6 minutes. With this baseline established, you would plan future sprints using 6 minutes per point. This is how you plan the amount of work to complete: some items will take less, some will take more, but the law of averages balances it out, because you don’t know how long anything will actually take until you start doing it. When work in progress rolls over to the next sprint, you get the credit there instead, for much less time spent in that sprint.
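The velocity arithmetic above can be sketched in a few lines of code (a toy example; the constants are just the totals from the innings):

```csharp
using System;

class VelocityExample
{
    static void Main()
    {
        const int pointsScored = 2 + 3;      // inning one + inning two
        const int minutesSpent = 5 + 20 + 5; // time recorded for each scoring play

        // Baseline velocity: average minutes per point.
        double minutesPerPoint = (double)minutesSpent / pointsScored;
        Console.WriteLine(minutesPerPoint); // 6

        // Planning: a 120-minute "game" should fit about this many points.
        Console.WriteLine(120 / minutesPerPoint); // 20
    }
}
```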

Points loosely relate to effort and/or time, but not directly. For instance, in basketball, a 3-point basket is shot from a longer distance than a 2-point basket. Both shots take roughly the same time, but the longer shot is more difficult. It might also take more attempts to make the 3-point shot, which translates the extra effort into time.

So to wrap this up: points are a measure of value, just as in any sporting game. If a set of work provides no value to the company, then the hours spent on it are wasted.

Hope you found this useful.

Scheduling Reports through code on ActiveReports Server

We have been evaluating ActiveReports Server for integration into our web application and I have to say I am very impressed. The features (when combined with the accompanying Active Reports product for canned reports) are very rich, and we are using many of them: custom security provider, embedded designer control, push reporting, etc.

One need we have is to automatically subscribe some of our users to canned reports when their accounts are created. The good thing about ActiveReports Server is that it has a full web service API, the same one used by their designer control, so anything that can be done there can be done with your own code. The bad news is that doing so is currently unsupported, and thus undocumented.

So for those who may be trying to do the same thing, I am posting this article showing how I achieved this using their “DesignerService”.

Basically, the way I implemented this was to make a wrapper service that abstracts their API from my web application. Jumping into the code, it looks like this:

using System;
using System.Linq;
using System.ServiceModel;
using MRMPortal.Service.ActiveReports.DesignerService;

namespace MRMPortal.Service.Services
{
    public static class ActiveReportsService
    {
        // These values must match the names of the Shared Schedules defined on the server.
        public enum ReportFrequency
        {
            Daily,
            Weekly,
            Monthly
        }

        public static bool SubscribeToReport(string token, string reportName, string emailAddress, ReportFrequency frequency)
        {
            var sharedScheduleName = frequency.ToString();
            var scheduleName = "Automatically Scheduled " + sharedScheduleName;
            var subject = reportName + " " + sharedScheduleName + " Report";
            var body = "This report subscription was automatically scheduled by the ARKS system on " + DateTime.Now.ToLongTimeString() + ".";

            var designerService = new DesignerServiceClient();

            // get info for the desired report
            var reports = designerService.GetList(token, StorageType.Report);
            var targetReport = reports.ItemDescriptions.Single(r => r.Name == reportName);

            // get info for the desired schedule
            var policyTemplates = designerService.GetList(token, StorageType.SharedPolicyTemplate);
            var targetPolicyTemplate = policyTemplates.ItemDescriptions.Single(r => r.Name == sharedScheduleName);

            // see if the policy already exists (e.g. it was created before, or maybe begun but not completed)
            var policy = GetReportPolicy(token, reportName, scheduleName);

            if (policy == null)
            {
                // create the policy
                var createResult = designerService.CreatePolicy(token, targetReport.Id, scheduleName, PolicyType.ScheduledPolicy, targetPolicyTemplate.Id);

                // get the new policy
                policy = createResult.Policies[0];
            }

            // add a cache policy
            var cachePolicy = new CachePolicy();
            cachePolicy.AbsoluteExpirationAppendage = 604800;
            cachePolicy.Priority = CacheItemPriority.Normal;
            cachePolicy.SlidingExpiration = -1;
            policy.CachePolicy = cachePolicy;

            // set policy properties
            policy.ModifiedBy = "Dominic Hall";
            policy.ModifiedDate = DateTime.Now;

            // set the recipient
            var distribution = new EMailDistribution();
            distribution.Subject = subject;
            distribution.MessageBody = body;
            distribution.To = new[] { emailAddress };
            distribution.AsAttachment = true;
            distribution.AsLink = false;
            policy.Distribution = distribution;

            // create render options
            var renderOptions = new RenderOptions();
            renderOptions.Extension = "PDF";
            renderOptions.Name = scheduleName;
            renderOptions.ReportId = targetReport.Id;
            renderOptions.ReportType = ReportType.SW;

            // create journal entry
            var journalEntry = new JournalEntryTemplate();
            journalEntry.Job = renderOptions;
            journalEntry.EntryPriority = JournalEntryPriority.Normal;
            policy.JournalEntry = journalEntry;

            // create day list: what day(s) the report gets sent out on
            var dayList = new[]
            {
                new Day { DayOfWeek = DayOfWeek.Monday, Ordinal = 0 }
            };

            // create recurrence rule
            var recurrenceRule = new RecurrenceRule();
            recurrenceRule.Frequency = Frequency.Daily; // This is how often the schedule is processed, not how often the report is sent.
            recurrenceRule.Interval = 7; // once every seven days
            recurrenceRule.ByDayList = dayList;
            recurrenceRule.StartDayOfWeek = DayOfWeek.Sunday;

            // create recurrence set
            var recurrenceSet = new RecurrenceSet();
            recurrenceSet.RecurrenceRules = new[] { recurrenceRule };
            recurrenceSet.ExceptionDates = new DateTime[] { };
            recurrenceSet.ExceptionRules = new RecurrenceRule[] { };
            recurrenceSet.RecurrenceDates = new DateTime[] { };
            recurrenceSet.StartDate = new DateTime(2012, 10, 23, 12, 0, 0, DateTimeKind.Utc); // this must match the start date in the shared template

            // create and attach schedule
            var schedule = new Schedule();
            schedule.Enabled = true;
            schedule.RecurrenceSet = recurrenceSet;
            policy.Schedule = schedule;

            // update the policy with the new settings
            var saveResult = designerService.SavePolicy(token, policy);

            // if an error happened, throw an exception
            if (saveResult.Error != null)
                throw new ServiceActivationException(saveResult.Error.Description);

            // return true to indicate success
            return true;
        }

        public static Policy GetReportPolicy(string token, string reportName, string policyName)
        {
            var designerService = new DesignerServiceClient();

            // get info for the desired report
            var reports = designerService.GetList(token, StorageType.Report);
            var targetReport = reports.ItemDescriptions.Single(r => r.Name == reportName);

            var policies = designerService.GetPolicies(token, targetReport.Id);
            var policy = policies.Policies.SingleOrDefault(p => p.Name == policyName);

            return policy;
        }
    }
}
So what is happening here? In a nutshell, schedules are types of “Policies”, and a report can have many policies. We need to add a policy of type “PolicyType.ScheduledPolicy”, so we use the API method “CreatePolicy” to make a placeholder for it. This placeholder, however, doesn’t have any distribution information; it only has the type of Shared Schedule to use, for example, “Weekly”. We then need to add all the additional properties to the policy and resubmit it through the “SavePolicy” API method.

I perform the additional step of checking whether the policy already exists before creating a new one, so that this code can also be used to perform an update, or at least won’t crash if the schedule already exists.

Some nuances to be aware of:

  1. Even though this is not a “cache” policy, it needs to have CachePolicy settings applied to it.
  2. The renderOptions ReportType is “ReportType.SW” for the internal ActiveReports Server reports. They also have ReportType.AR and ReportType.DDR to support their other canned report types.
  3. The recurrence set’s StartDate MUST MATCH EXACTLY the start date of the Shared Schedule that was used.

The test code to run this looks like so:

    public void ActiveReportsService_SubscribeToReport_ReturnsCorrectData()
    {
        // Arrange
        const string token = "1"; // The token for your user
        const string reportName = "MyReport"; // The name of the report to schedule
        const string emailAddress = ""; // The email address to receive the subscription
        const ActiveReportsService.ReportFrequency frequency = ActiveReportsService.ReportFrequency.Weekly;

        // Act
        var result = ActiveReportsService.SubscribeToReport(token, reportName, emailAddress, frequency);

        // Assert
        Assert.AreEqual(true, result);
    }
Most of this I learned by reverse engineering their API. Some of it I know through the help of Bhupesh Malhotra from ComponentOne and his excellent assistance in getting past roadblocks. If anyone tries this, or has tried it, and knows better ways to do anything here, I would love to compare experiences. Leave your ideas in the comments.

Geocode addresses in .Net using the Google Geocoding API

Ten years ago I had a project that required finding locations within a certain radius of a given coordinate. Back then, the US Postal Service had a database of latitude-longitude coordinates for every zip code in the US. We loaded that monthly into a local database and wrote custom code to perform look-ups.

Recently I had to do the same thing, but the world is a much different place now. A quick web search on geocoding produces several free web services that provide much better data, in real time, and worldwide. I settled on the Google solution given that Google will probably be around for a while and their data is very reliable. Note that this service limits you to 2,500 queries a day, which is far more than we will need for this application. You can read all about the details in the Google Geocoding API documentation.

Basically, this service uses REST calls, accepting a simple address string and returning either XML or JSON. I chose JSON for its simplicity, but had to use an open source library for parsing the data (more on this later). Two parameters are required: sensor and address. Address is self-explanatory, but “sensor” tells Google whether we are using a hardware location sensor, such as one in a phone. In this case we are not, so it is false. For an example of an address look-up, enter the following into your browser and you will see the JSON results returned: http://maps.googleapis.com/maps/api/geocode/json?sensor=false&address=350+5th+Ave,+New+York,+NY+10118
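For reference, the relevant slice of the JSON that comes back looks roughly like this (heavily abbreviated; the real response also includes address components, viewport bounds, and more):

```json
{
  "status": "OK",
  "results": [
    {
      "formatted_address": "350 5th Avenue, New York, NY 10118, USA",
      "geometry": {
        "location": {
          "lat": 40.748070,
          "lng": -73.984959
        }
      }
    }
  ]
}
```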

As you will see, a bunch of data is returned. For my purposes all I care about is the Latitude and Longitude, and also the Status just to make sure the data call succeeded. So I created a custom class in my project to retain this information.

public class GeoData
{
   public string Status { get; set; }
   public double Latitude { get; set; }
   public double Longitude { get; set; }
}

So how do we execute this service call from .Net and parse the results into the GeoData container?

Unit Tests

In the spirit of TDD, I first created these unit tests to confirm the service is returning correct data:

public void GeocodeAddress_Returns_Success()
{
   // Arrange
   const string address = "350 5th Ave, New York, NY 10118";

   // Act
   var result = GeoService.GeocodeAddress(address);

   // Assert
   Assert.AreEqual("OK", result.Status);
}

public void GeocodeAddress_Returns_Correct_Location()
{
   // Arrange
   const string address = "350 5th Ave, New York, NY 10118";

   // Act
   var result = GeoService.GeocodeAddress(address);

   // Assert
   Assert.AreEqual(40.74807, result.Latitude);
   Assert.AreEqual(-73.984959, result.Longitude);
}

Parsing JSON

Executing the call to the service returns JSON, a lightweight text data format that originated with JavaScript. Since I needed to get the data into a .NET structure, I chose the excellent JsonFx open source component (installable in Visual Studio through the NuGet package manager) for this task. The JsonFx code and documentation can be found on the project’s site.

The feature of this library that I used converts the JSON result string into a .Net 4.0 dynamic object that can then be used to extract just the data elements that I care about.

The Code

I created a class in my project library to encapsulate the Google service call. This is typically a good practice; it lets us swap out the provider, say from Google to Bing, without breaking any of the consuming code.

using System.Net;
using System.Web;
using JsonFx.Json;

namespace Project.Services
{
   public static class GeoService
   {
      public static GeoData GeocodeAddress(string address)
      {
         const string geoService = "http://maps.googleapis.com/maps/api/geocode/json?sensor=false&address=";

         // retrieve the json geo data
         var encodedAddress = HttpUtility.UrlEncode(address);
         var jsonResult = new WebClient().DownloadString(geoService + encodedAddress);

         // convert the json result data to an object
         var reader = new JsonReader();
         dynamic jsonObject = reader.Read(jsonResult);

         // populate our typed object from the dynamic one.
         var geoData = new GeoData();
         geoData.Status = jsonObject.status;
         geoData.Latitude = jsonObject.results[0].geometry.location.lat;
         geoData.Longitude = jsonObject.results[0].geometry.location.lng;

         return geoData;
      }
   }
}
The code should be pretty easy to follow. It does the following:

  1. URL Encodes the address string so that spaces are replaced with “+” characters, etc., so it will work as part of a URL
  2. Uses the WebClient object to execute the web service call and retrieve the JSON result as a string.
  3. Uses the JsonFx JsonReader to load the data into a dynamic object
  4. Creates a new GeoData object and maps its three properties from the dynamic JSON object

If you combine these pieces and execute the tests, you will see they pass.

That is it... easy! We can now geocode any address instantly and for free.

Force the Photo Screen Saver through Domain Policy

After searching for a way to push out settings so that everyone in the company gets a photo screen saver running with predefined photos, I was able to piece together a solution. The steps to enable this are listed below. If there are issues with this, or a better way to do any section, please let me know.

Overall, there are three steps to accomplish this. They are:

  1. Create a batch file to copy the photos from a file share to a local folder on the user’s computer, and put this into the logon script of the GPO policy
  2. Set the Administrative Templates to force the correct screen saver to engage
  3. Set the registry keys to direct the screen saver to use the user’s local photo folder

Note: in these examples I have a network file share called “Software” on the fileserver “FS1” that all users have read access to. That share contains a folder called “Screen Saver Photos” that is used as the source for the photos. This folder is copied automatically to the user’s C drive when they log on. The reason we don’t just point the screen saver at the share is that we want the screen saver to work when users are disconnected, such as offsite with a laptop.

Step 1: Create a batch file to copy the photos from a file share to a local folder on the user’s computer, and put this into the logon script of the GPO policy

Create a batch file on the domain controller called “ScreenSaverPhotoCopy.bat”. Edit this file and place the following four lines into it. This batch file will run when users log into their machines. These lines map the network drive, make the local folder, purge old files from the folder, and refresh the files from the share, respectively.

net use s: \\fs1\Software
mkdir "C:\Screen Saver Photos"
del /Q "C:\Screen Saver Photos\*.*"
xcopy "S:\Screen Saver Photos" "C:\Screen Saver Photos"

On the domain server, open “Administrative Tools -> Group Policy Management”

Drill into your domain and pick the policy you want to affect, such as “Default Domain Policy”


Right click and choose “Edit…”. The Group Policy Management Editor will open.

Open “User Configuration -> Windows Settings -> Scripts (Logon/Logoff)”


Double-click “Logon”; this opens the Logon Properties dialog


You need to move the batch file into a particular store on the server, so click the “Show Files…” button. Copy the “ScreenSaverPhotoCopy.bat” file that you made into this location. Close it.


Back on the Logon Properties screen, click “Add…”

Click browse and select the batch file you just copied up.


Click OK and the new logon script will appear in the list.


Step 2: Set the Administrative Templates to force the correct screen saver to engage

Back on the Group Policy Management Editor, Expand the “User Configuration –> Policies –> Administrative Templates –> Control Panel” node and click the “Display” item.


This lists the domain policy settings that affect themes, including screen savers. Enable all the items you want in effect, depending on how strict you want things. In this case, I enabled “Enable Screen Saver”, “Password protect the screen saver”, “Screen saver timeout”, and “Force specific screen saver”. For that last one, set the screen saver to “PhotoScreensaver.scr” like so:


Step 3: Set the registry keys to direct the screen saver to use the user’s local photo folder

In this step you need to pull the registry settings for the Photo Screen Saver off of a Windows 7 client machine and put them into the domain policy. The Photo Screen Saver uses a particular set of registry keys and, for some reason, encrypts the path to the photos folder, so you cannot enter it by hand. The process is: configure the screen saver on a Windows 7 machine, export the registry settings, import them on the domain controller, then add the key path to the policy.

On a Win7 machine, configure the Photo Screen Saver.

Pick “Photos”


Click “Settings…”.


Browse to the local photo folder, in this case “C:\Screen Saver Photos”. Also set the speed and whether you want the photos to shuffle.

Run Regedit and drill down into “HKEY_CURRENT_USER\Software\Microsoft\Windows Photo Viewer\Slideshow\Screensaver”. Here you will see three keys: one called EncryptedPIDL, which is the encrypted path to the photos folder, one for shuffle, and one for speed.
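For reference, the exported .reg file will look something like this (the dword values and the EncryptedPIDL string here are placeholders; your export will contain the actual encrypted path and your chosen settings):

```
Windows Registry Editor Version 5.00

[HKEY_CURRENT_USER\Software\Microsoft\Windows Photo Viewer\Slideshow\Screensaver]
"EncryptedPIDL"="<base64-encoded path data from your machine>"
"Shuffle"=dword:00000001
"Speed"=dword:00000001
```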


Right click the “Screensaver” folder and choose “Export”. Save the file as “PhotoKeys.reg”.

Copy the file to your domain controller.

Double-click “PhotoKeys.reg” to import the keys into the domain controller’s registry. You will see a warning like so:


If you want you can repeat the registry steps from the Windows 7 machine to confirm the keys were imported properly.

Back in the “Group Policy Management Editor”, drill into “User Configuration –> Preferences –> Windows Settings –> Registry”


Right click “Registry” and choose “New –> Registry Wizard”

In the “Registry Browser” window, choose “Local Computer”. Note: I did try to use “Another Computer” to skip the registry import steps above, but was unable to get this to work.


Pick the registry key to include, which is the same one we imported above. Check off the four keys that are listed under the “Screensaver” folder.


Click Finish. This will create a folder in the policy under Registry, named “Registry Wizard Values”. I don’t think this name matters, but I changed it to “Photo Screen Saver Settings” for clarity. You can also verify the keys are there and have the correct values.


Close the Group Policy Management Editor


Now test logging off and back in on one of your domain Windows 7 machines. You should see that the local photos folder was created, the photos were copied over from the share, and your screen saver settings are in place (assuming you allowed them to be viewed / changed). Wait the appropriate time and you should see your photos cycling in all their glory.

DateTime Extension Methods for first and last days of the month

I have seen several articles on how to derive the first and last day of the month from a given date. This approach uses extension methods, which make for a more elegant solution.

using System;

namespace MyLibrary.Extensions
{
   public static class DateTimeExtensionMethods
   {
      public static DateTime FirstDayOfMonth(this DateTime value)
      {
         return new DateTime(value.Year, value.Month, 1);
      }

      public static DateTime LastDayOfMonth(this DateTime value)
      {
         return new DateTime(value.Year, value.Month, 1).AddMonths(1).AddDays(-1);
      }
   }
}
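Usage then looks like this (a quick sketch; the format string is just for display):

```csharp
using System;
using MyLibrary.Extensions;

class Program
{
    static void Main()
    {
        var date = new DateTime(2012, 10, 23);

        Console.WriteLine(date.FirstDayOfMonth().ToString("yyyy-MM-dd")); // 2012-10-01
        Console.WriteLine(date.LastDayOfMonth().ToString("yyyy-MM-dd"));  // 2012-10-31
    }
}
```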

What if the car industry was audited?

Recently I went through a battery of audits at work, and each one has succeeded in stripping away our ability to get work done in anything resembling a timely manner. I have been wondering what auditors would do if they audited the auto industry as strictly as they audit IT…

Auditor: We have looked at your cars, and their design presents a huge security risk in that the same person, the driver, is allowed to operate both the speed and direction at the same time. This risk presents opportunities for severe abuse including running people over, smashing through buildings, or running other cars off the road.

Car Designer: We have been building cars the same way for over 50 years, and there has never been an issue.

Auditor: Do you have data to prove that? We didn’t think so. We have statistics that indicate that throughout those 50 years, thousands of cars have run other people over, smashed through buildings, and have run other cars off the road.

Car Designer: Well, what would you suggest as a possible solution?

Auditor: Our recommendation is to move the steering wheel to the passenger side, so that two people are required to drive a car; one will control the speed and the other one will control the steering. This will enforce a checks and balances pattern that will remove the ability for any one individual to abuse their driving power for malicious intent.

Car Designer: But what if the two drivers decide to work together to run someone over?

Auditor: That is why we are also recommending that a barrier be installed between the drivers so that they cannot communicate intent, thus eliminating the ability for them to coordinate such a malicious act.

Car Designer: If they cannot communicate, how will they be able to drive the car in a coordinated fashion?

Auditor: We are not trying to eliminate communications. That would be crazy. Therefore, we recommend that a small trapdoor be installed in the barrier so that notes can be passed back and forth, such as “turn left” or “slow down”.

Car Designer: But by the time a note is written and passed over, the original intent will have been lost, and someone will have been run over, a building will have been smashed through, or a car will have been driven off the road.

Auditor: That is out of the scope of this audit. Another firm will be coming in 3 months from now to assess these potential issues.