Setting Up a JavaScript Testing Environment in Visual Studio

I’m going to walk you through setting up your JavaScript testing environment in Visual Studio, but I’m not going to be explaining how to write Jasmine tests in this article. For that, I recommend going to the Jasmine documentation site for some good examples, or watching one of the several Pluralsight courses that discuss JavaScript testing using Jasmine.

We’re going to set up the Jasmine testing framework, the Karma test autorunner, and supporting Visual Studio tools for testing JavaScript. I highly recommend AngularJS for structuring your code in a very testable manner. It’s like the Best. Thing. Ever. But that’s another topic altogether. Without such a framework, though, JavaScript testing can be a nightmare, even with these testing tools.

Karma

In this first set of steps, we’ll be setting up Karma to run all of our Jasmine tests:

  1. For everything that follows, you’ll need Jasmine, so install it into the project containing your tests via NuGet. You can find it by searching for jasmine.js.
  2. You’ll need Node.js in order to run the Karma test autorunner, so go to the Node.js site and click the big, green, shiny Install button. You can’t miss it. You’ll probably end up using Node.js for more than just Karma, so I recommend doing this whether or not you’re ready to use Karma.
  3. Installing Node.js also adds its runtime location to your system path, so even if you already have a command prompt open, you now need to open a new one to ensure the path exists inside the prompt.
  4. Now that Node.js is installed and you have a new command prompt open, you can run its version of NuGet, which is called npm, to install Karma.
  5. Run the following from the command prompt, which installs the Karma engine:
    npm install -g karma
  6. Run the following from the command prompt, which installs the command line interface for it:
    npm install -g karma-cli
  7. Although I did not have to do this step, others have. I’m not sure why I didn’t need it, since I had never explicitly installed this package before; for some reason it was treated as a Karma dependency when I ran the two commands above, but that doesn’t happen for everyone. This installs the Jasmine support for Karma:
    npm install -g karma-jasmine

    Jasmine is the JavaScript testing framework that Karma will autorun for us.

  8. Similarly, I also did not need to install the Karma Chrome launcher separately, but others did need to. So run this command:
    npm install -g karma-chrome-launcher

    If you want Karma to use a different browser to execute the automated testing, you can install that instead. For example:

    npm install -g karma-safari-launcher

    Either way, Karma needs a browser in which to execute your Jasmine tests, so at least one launcher must be installed for its automated testing process.

  9. Karma needs a configuration file (which happens to be a JavaScript file consisting mainly of an object that sets configuration options). Since this configuration file needs to live in the folder that your JavaScript file references will be relative to, change to that folder before running the next command.
  10. To create the initial configuration file, run this from the command line:
    karma init

    (you may notice that karma is a DOS batch file that was installed in the above steps). It will prompt you for some values; for now, press Enter at each question to accept the defaults, since we’ll be editing the config file manually afterwards. This will create a file called karma.conf.js.

  11. Edit the karma.conf.js file. This next step may take some trial and error. You need to specify an array of paths or file names (relative to the folder containing this config file) of all the JavaScript dependencies required for your tests. You can use wildcard characters as well. A single asterisk represents any set of characters (ex: app/*.js), whereas a double asterisk means to recurse through all subdirectories (ex: app/**/*.js means look for all JavaScript files within all subdirectories under app).
  12. Assuming you will be using Chrome and Jasmine, the config file should look something like this:
    // Karma configuration
    // Generated on Thu May 08 2014 13:19:36 GMT-0400 (Eastern Daylight Time)
    
    module.exports = function(config) {
      config.set({
    
        // base path that will be used to resolve all patterns (eg. files, exclude)
        basePath: '',
    
        // frameworks to use
        // available frameworks: https://npmjs.org/browse/keyword/karma-adapter
        frameworks: ['jasmine'],
    
        // list of files / patterns to load in the browser
        files: [
          'Scripts/**/*.js',
          'app/*.js',
          'app/**/*.js',
          'Tests/*.js'
        ],
    
        // list of files to exclude
        exclude: [
    
        ],
    
        // preprocess matching files before serving them to the browser
        // available preprocessors: https://npmjs.org/browse/keyword/karma-preprocessor
        preprocessors: {
    
        },
    
        // test results reporter to use
        // possible values: 'dots', 'progress'
        // available reporters: https://npmjs.org/browse/keyword/karma-reporter
        reporters: ['progress'],
    
        // web server port
        port: 9876,
    
        // enable / disable colors in the output (reporters and logs)
        colors: true,
    
        // level of logging
        // possible values: config.LOG_DISABLE || config.LOG_ERROR || config.LOG_WARN || config.LOG_INFO || config.LOG_DEBUG
        logLevel: config.LOG_INFO,
    
        // enable / disable watching file and executing tests whenever any file changes
        autoWatch: true,
    
        // start these browsers
        // available browser launchers: https://npmjs.org/browse/keyword/karma-launcher
        browsers: ['Chrome'],
    
        // Continuous Integration mode
        // if true, Karma captures browsers, runs the tests and exits
        singleRun: false
      });
    };
    
  13. After saving these changes, you can run all of your tests via Karma by using this command from within the same folder as the config file. By the way, you could specify the config file path explicitly (check out the Karma documentation, and see the example just after this list), but for our purposes, we’ll be using the defaults:
    karma start
  14. As long as the command prompt is open, every change you make to the scripts under test will automagically trigger a re-test, thanks to the autoWatch: true setting in the config file.
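
If your config file lives somewhere else (or has a different name), the Karma CLI also accepts its path as an argument to karma start. A hypothetical example (the path is only an illustration):

    karma start Tests/karma.conf.js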

Chutzpah

If you’re like me, and want the flexibility of running tests on demand via the IDE, you have at least a couple of choices. Follow these steps to install the Chutzpah Visual Studio extensions (it takes some chutzpah to name an extension “Chutzpah” and expect everyone to understand what that means :) ), which will add the ability to right-click on a test folder or test script to run a quick test:

  1. Create a reference for all of your dependent JavaScript files at the top of each JavaScript file under test (similar to what you would do to get JavaScript IntelliSense to work correctly within Visual Studio). As James of Code for Coffee suggests, it’s recommended you create a _references.js file and place it with your main JavaScript files; all references within that file are relative to where it’s located (see the sketch just after this list). Then all you’ll need to do is include _references.js at the top of each of your JavaScript files under test.
  2. Install the Chutzpah test adapter and the Chutzpah context menu.
  3. Create a chutzpah.json file (which is Chutzpah’s configuration file) in the folder that your JavaScript file references are relative to. It should look like this:
    {
        "Framework": "jasmine",
        "RootReferencePathMode":"SettingsFileDirectory"
    }
    

    This ensures that Chutzpah can find all JavaScript files under test relative to the config file’s location.
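
Here’s a hypothetical _references.js to illustrate the suggestion from step 1; the file names and paths below are examples only, and should reflect your own project layout:

/// <reference path="Scripts/jquery-2.1.1.js" />
/// <reference path="Scripts/angular.js" />
/// <reference path="app/app.js" />
/// <reference path="app/controllers/customersController.js" />

Each JavaScript file under test would then start with a single reference back to that file (using a path relative to the test file’s location), for example:

/// <reference path="../_references.js" />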

That’s basically it. Just right click a folder containing your JavaScript tests, or on one of the files if you want to just run its tests, and click Run JS Tests. Chutzpah also installs an option for showing code coverage, but I have not experimented with that yet.

ReSharper Unit Testing

If, like me, your Visual Studio extension tool of choice is ReSharper (R#), you may want to run its test runner instead. It should work fine, with one potential pitfall. If you try running your tests, but they all seem hung up, make sure you remove any references to jasmine.js from your _references.js file or from the top of any JavaScript files under test. I know that hurts, because now your editor won’t recognize the reference from within any of your Jasmine test files, leaving you with a bunch of blue squigglies. But for some reason, the R# test runner gets confused. I’m going to contact JetBrains about that.


JIT Learning

When I first entered the field of software development, in order to become a so-called expert, we needed to learn a handful of technologies. It was challenging, but it was doable.

This is no longer possible.

Today, we need to be able to apply JIT (just-in-time) learning techniques to keep up. It’s just not possible to learn everything about a single tool, much less every tool we’ll need to use on a given project. In Microsoft’s .NET framework alone, there are over 10,000 classes as of this writing. If you’re a .NET C# Web developer, you need to have at least a working knowledge of the .NET framework, the CLR, Visual Studio, C#, HTML, JavaScript, CSS, Windows, IIS, SQL (no matter what engine), and the specifics of a particular SQL engine. And in most systems, you may need to understand another handful of technologies, core concepts, and third party tools.

So when do you learn all of this? Just-in-time.

You can accumulate knowledge of the core technologies over several projects, yet still only touch the surface. And rarely does a new project come along where you wouldn’t need to learn at least one new technology you’ve never touched before, or may not have even heard of before. For example, on my latest project, I added Twitter Bootstrap, Telerik’s KendoUI, and Dapper to my arsenal. In addition, I’ve explored Font Awesome and LESS for incorporation into a future release. I’ve also expanded my JavaScript and jQuery knowledge to make better use of those.

So how do you keep up with all these tools and technologies? You can try to anticipate everything you think you’ll need to learn, but aside from a few educated guesses, you’d have to be clairvoyant to keep up with the changes in our field. It’s a lot like using a waterfall SDLC; I don’t think it really works anymore. There are too many unseen forces working just under the radar, and you’ll constantly be blindsided.

How do I keep up with this stuff? Just-in-time learning. For the most part, I learn as I go. Since most new tools we need to use build upon the core concepts we’ve built up over our experiences, the learning curve is usually not so large for adding something new. Part of my strategy is using supplemental learning to build up that core skill set, which I’ll discuss as well.

This is my current strategy for learning. Since we all learn somewhat differently (in our own combination of kinetic, visual, and auditory resources), your mileage may vary:

  1. Research: Unless you have a team leader assigning which technologies to use on a project, you’ll likely be involved in researching the best solution for a particular requirement. For example, we wanted to start our project by using a framework to help drive the look and feel of our web apps, so we started comparing such tools. We decided upon Twitter’s Bootstrap framework. I watched intro tutorials, read reviews, viewed sample code, and experimented with the samples.
  2. Video (Passive): Once I’ve decided what I want to learn, I usually start by watching a series of videos on the topic. Pluralsight is easily my favorite choice for an intro of some of the most common technologies and tools, although more obscure or new tools may not (yet) be covered. YouTube is another great source for such tools. Of course, strongly supported tools may have their own video tutorials, although I usually find those lacking. It seems to be an afterthought for a lot of companies, and production is often poor or inconsistent. Since, in my role as a consultant I’m expected to be an expert on the tools I’m using (unless a new technology is dictated by the client), I normally use my breakfast time before business hours to watch these videos. It allows me to absorb myself into the technology in a passive manner, which helps get me acquainted before diving in hands-on. If you’re lucky enough to attend a local user group meeting on the topic, that’s also a great way to get an intro as well as allow for direct Q&A. But it’s rare to have such perfect timing, unless the technology you’re about to use is the new “flavor of the week,” and the rest of the world is learning about it at the same time.
  3. Video (Active): Although I’m still in more of a passive state of mind at breakfast, by lunch I’ve usually been in coding mode, so this is a good time to re-watch parts of the video and actually try out some of the examples being discussed. Although video is great for pausing and rewinding, it’s a bit awkward to pinpoint the exact locations of what you want to re-watch, so if example files are available for download, I prefer playing around with those. Be careful, though, since it’s too easy to have the examples do the work for you, since they’re usually already fully written. Without the hands-on (read: typing in yourself), it usually won’t sink in as quickly.
  4. Google / Bing == Stack Overflow: As you play around with examples, you’ll likely have some questions that aren’t yet explained by the point you’ve reached in the video course. I normally find that it’s easier to search for answers to my questions instead of trying to find it in the tool’s documentation (if it even exists). Since the best search results usually end up at Stack Overflow, I spend a lot of time reading answers there. Keep a close eye on the timestamp of the answers, though. They may be outdated. But if it’s a good answer, it may also have a direct link to the part of the documentation you’ll need.
  5. Web Articles (Blog and Otherwise): When it comes time to dive into a specific piece of the technology I’m trying to use while learning, I start focusing on specific online articles. Several years ago, I’d save and read magazine articles. Well, I mainly saved them with the expectation that I’d eventually find the need to read some of them; I’d say that happened with 5% to 10% of those articles. But we don’t even need to do that anymore. Since many articles are available online, allowing for random access, the magazine is truly obsolete. I still subscribe to a couple, but I think that’s mainly to hold on to the memories of a bygone time. Besides, I’m sure they’re making the font on those things smaller every year. Or it’s my eyes :) Seriously, I’d often start an article in a magazine, only to finish it online.
  6. Books: With all this JIT learning, there’s still that nagging feeling that you could be doing things better. I feel like that all the time, and it used to bother me. No longer. I’ve learned to become more pragmatic over the years. Job # 1 is to deliver a solid solution, making it as maintainable as feasible. But refactoring should be built into subsequent work, whether or not you do some refactoring during the TDD (or otherwise, unit test) process (if your shop encourages that — which it should). This is the time to supplement your knowledge with a deeper understanding and best practices in the technology and tool you JIT-learned. This is where books become useful to me. Even if a book is inherently a bit outdated, it’s still useful, because core concepts and best practices live longer than specifics. I rarely read technical books cover-to-cover anymore. I may read a few introductory chapters, but then I’d skim through specific chapters based upon where I’m focused.
  7. Deep-Dive Videos: But I usually reach for a detailed video course instead of a book. Although Pluralsight has some deep-dive topics in addition to their introductory tutorials, I feel the TekPub videos complement them quite well, and focus more on the deep-dives. They’re usually opinionated, and they often focus on best practices, and make you really understand the topic in ways you’ve never thought of before. Watching someone code and think out loud at the same time is often as valuable as pair programming. Both sites (and there are others) are well worth the investment in your future.

In between my JIT learning cycles, I spend those free hours supplementing my knowledge with deeper dives, as I described in points 6 and 7, above. I use those breakfast and lunch sessions to fill in any gaps, and do some soul-cleansing refactoring in subsequent sprints based on that learning. Such exercises helped me become a better C# and JavaScript coder over the past year.

I also use the off-cycles to learn other technologies I predict with some certainty that I’d be using within the next year or so. For example, learning MonoTouch, MonoGame, and XNA in anticipation of implementing some app ideas and starting a new venture.

As developers, our education will always be an ongoing process. There is just so much to learn. We must develop a strategy just to keep up or get ahead of the game, yet remain current and productive. Although your strategy may differ from mine, hopefully I’ve provided some ideas to get you started.


Handling Session and Authentication Timeouts in ASP.NET MVC

There’s a lot more than meets the eye when you need to handle session and authentication timeout scenarios in ASP.NET MVC. For some reason, I expected this to be a no-brainer when I first worked on an app that needed this functionality. It turns out there are several complications that we need to be aware of. On top of that, be prepared for the potential of a lot of test points on a single page.

Server Timeout Checks

We’ll create a couple of action filters to provide cross-cutting checks for timeout scenarios. The first will normally be hit when the browser session has timed out (because I’d set that to a shorter time span than authentication), but will also handle if the authentication has timed out first:

[AttributeUsage(AttributeTargets.Class | AttributeTargets.Method, Inherited = true, AllowMultiple = true)]
public class SessionExpireFilterAttribute : ActionFilterAttribute
{
    public override void OnActionExecuting(ActionExecutingContext filterContext)
    {
        HttpContext ctx = HttpContext.Current;

        // If the browser session or authentication session has expired...
        if (ctx.Session["UserName"] == null || !filterContext.HttpContext.Request.IsAuthenticated)
        {
            if (filterContext.HttpContext.Request.IsAjaxRequest())
            {
                // For AJAX requests, we're overriding the returned JSON result with a simple string,
                // indicating to the calling JavaScript code that a redirect should be performed.
                filterContext.Result = new JsonResult { Data = "_Logon_" };
            }
            else
            {
                // For round-trip posts, we're forcing a redirect to Home/TimeoutRedirect/, which
                // simply displays a temporary 5 second notification that they have timed out, and
                // will, in turn, redirect to the logon page.
                filterContext.Result = new RedirectToRouteResult(
                    new RouteValueDictionary {
                        { "Controller", "Home" },
                        { "Action", "TimeoutRedirect" }
                });
            }
        }

        base.OnActionExecuting(filterContext);
    }
}

[AttributeUsage(AttributeTargets.Class | AttributeTargets.Method, Inherited = true, AllowMultiple = true)]
public class LocsAuthorizeAttribute : AuthorizeAttribute
{
    protected override void HandleUnauthorizedRequest(AuthorizationContext filterContext)
    {
        HttpContext ctx = HttpContext.Current;

        // If the browser session has expired...
        if (ctx.Session["UserName"] == null)
        {
            if (filterContext.HttpContext.Request.IsAjaxRequest())
            {
                // For AJAX requests, we're overriding the returned JSON result with a simple string,
                // indicating to the calling JavaScript code that a redirect should be performed.
                filterContext.Result = new JsonResult { Data = "_Logon_" };
            }
            else
            {
                // For round-trip posts, we're forcing a redirect to Home/TimeoutRedirect/, which
                // simply displays a temporary 5 second notification that they have timed out, and
                // will, in turn, redirect to the logon page.
                filterContext.Result = new RedirectToRouteResult(
                    new RouteValueDictionary {
                        { "Controller", "Home" },
                        { "Action", "TimeoutRedirect" }
                });
            }
        }
        else if (filterContext.HttpContext.Request.IsAuthenticated)
        {
            // Otherwise the reason we got here was because the user didn't have access rights to the
            // operation, and a 403 should be returned.
            filterContext.Result = new HttpStatusCodeResult(403);
        }
        else
        {
            base.HandleUnauthorizedRequest(filterContext);
        }
    }
}

As you can see, for both attributes we’re using a session variable holding the user name as an indication of whether a session timeout occurred. We’re checking to see if either the browser session or the authentication has expired. I like to set the browser session to a shorter time period than authentication, because I end up running into extra issues to code around if the authentication expires first while the session is still active.
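
For example, here’s a hypothetical web.config excerpt that keeps the session timeout shorter than the forms authentication timeout (the timeout values and login URL are placeholders; adjust them for your app):

<system.web>
  <!-- Browser session expires first... -->
  <sessionState mode="InProc" timeout="20" />
  <!-- ...while forms authentication lasts a bit longer. -->
  <authentication mode="Forms">
    <forms loginUrl="~/Account/Logon" timeout="30" slidingExpiration="true" />
  </authentication>
</system.web>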

Then we’re checking if this is an AJAX request. Since we cannot immediately redirect upon such a request, we instead return a JSON result containing the string “_Logon_”. Later, within a JavaScript function, we’ll check for this as one of the possible values used to determine if a timeout occurred.

By the way, in the second attribute’s HandleUnauthorizedRequest, we’re handling unauthorized scenarios differently from timeouts (unfortunately, MVC 3 doesn’t distinguish between the two out of the box). I got this idea from this article on StackOverflow. I believe the next version of MVC is supposed to provide better control for this by default.
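
As a quick usage sketch (the controller below is hypothetical), both attributes can be applied at the class or action level:

[SessionExpireFilter]
[LocsAuthorize(Roles = "Users")]
public class OrdersController : Controller
{
    // Every action on this controller now gets the session/authentication
    // timeout checks, plus the normal role check from the base AuthorizeAttribute.
    public ActionResult Index()
    {
        return View();
    }
}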

The Timeout Warning Message Page

If this wasn’t an AJAX request, we simply redirect to a /Home/TimeoutRedirect page, which briefly displays a message explaining to the user that their session timed out, and that they’ll be redirected to the logon page. We use the meta tag redirect (after 5 seconds) in this view:

<meta http-equiv="refresh" content="5;url=/Account/Logon/" />

<h2>
    Sorry, but your session has timed out. You'll be redirected to the Log On page in 5 seconds...
</h2>

The JavaScript Check

The following JavaScript function would be called in the success, error, and complete callback functions on a jQuery.Ajax call. We use it to check if the response returned an indication that a timeout occurred, before attempting to process. It assumes that the parameter, data, is passed in from the AJAX call response.

This function expects that one of three returned values indicate a timeout occurred:

  1. A redirect was already attempted by the controller, likely due to an authentication timeout. Since an AJAX response is usually expecting a JSON return value, and since the redirect is attempting to return the full actual Log On page, this function checks the responseText for the existence of an HTML <title> of “Log On” (the default log on page title in an MVC app).
  2. A redirect is in the process of being attempted by the controller, likely due to an authentication timeout. Since an AJAX response is usually expecting a JSON return value, and since the redirect is attempting to return a full redirect (302) info page, this function checks the responseText for the existence of an HTML <title> of “Object moved” (the default 302 page title).
  3. If a session timeout occurred, the value “_Logon_” should be returned by the controller action handling the AJAX call. The above action filters check to see if the session variable “UserName” is null, which would indicate a session timeout, but not necessarily an authentication timeout.

This function also expects an AJAX action handler called TimeoutRedirect, on the Home controller. If you use a different controller or action, you’ll need to modify the URL specified in the function. The parameter, data, should be the response from an AJAX call attempt.

function checkTimeout(data) {
    var thereIsStillTime = true;

    if (data) {
        if (data.responseText) {
            if ((data.responseText.indexOf("<title>Log On</title>") > -1) || (data.responseText.indexOf("<title>Object moved</title>") > -1) || (data.responseText === '"_Logon_"')) thereIsStillTime = false;
        } else {
            if (data == "_Logon_") thereIsStillTime = false;
        }

        if (!thereIsStillTime) {
            window.location.href = "/Home/TimeoutRedirect";
        }
    } else {
        $.ajax({
            url: "/Home/CheckTimeout/",
            type: 'POST',
            dataType: 'json',
            contentType: 'application/json; charset=utf-8',
            async: false,
            complete: function (result) {
                thereIsStillTime = checkTimeout(result);
            }
        });
    }

    return thereIsStillTime;
}

The Forced AJAX Attempt

There may be times you want to check for a timeout scenario even if your app doesn’t require an AJAX call. That’s why the function is written so that if no parameter is passed in, a simple AJAX call will be made, forcing communication with the server in order to get back session and authentication information, so we can see if a timeout had occurred. There’s no way a browser would know this information until communication with the server is attempted. Once that AJAX call is made, this function will call itself with an actual data value that can now be interrogated.
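
The server side of that forced call can be trivial. Here’s a minimal sketch of the two Home controller actions this article assumes; the bodies hardly matter, since the action filters shown earlier short-circuit the request when a timeout has occurred:

public class HomeController : Controller
{
    // Target of the forced AJAX attempt. If the session or authentication has
    // expired, SessionExpireFilterAttribute replaces this result with the
    // "_Logon_" JSON payload before the body ever runs.
    [SessionExpireFilter]
    [HttpPost]
    public JsonResult CheckTimeout()
    {
        return Json("OK");
    }

    // Briefly displays the "your session has timed out" view, whose meta
    // refresh tag redirects to the logon page after 5 seconds.
    public ActionResult TimeoutRedirect()
    {
        return View();
    }
}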

Client-Side Calling Code Sample

The function returns true if no timeout occurred yet. We simply execute our callback logic if the result of this call is true (no timeout occurred):

$.ajax({
    url: "/MyController/MyAction",
    type: 'POST',
    dataType: 'json',
    data: jsonData,
    contentType: 'application/json; charset=utf-8',
    success: function (result) {
        if (checkTimeout(result)) {
            // There was no timeout, so continue processing...
        }
    },
    error: function (result) {
        if (checkTimeout(result)) {
            // There was no timeout, so continue processing...
        }
    }
});

Again, if you want to check for a timeout where no AJAX call is needed, such as for a click event when the user is navigating a list box, just call checkTimeout() with no parameter. Just note that a simple AJAX call will be injected, so be aware of potential performance impacts, and don’t overuse this. Also, be aware that some browsers, such as IE, will automatically cache AJAX results, and the call may not be made (and, therefore, the timeout check won’t occur). You may have to turn off AJAX caching ($.ajaxSetup({ cache: false })) in this case.
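
Here’s a hypothetical example of that parameterless usage (the element ID is made up):

$('#customerList').on('change', function () {
    // No argument passed: checkTimeout() makes its own small AJAX call to
    // /Home/CheckTimeout to find out whether the session is still alive.
    if (checkTimeout()) {
        // No timeout occurred, so it's safe to continue client-side processing...
    }
});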

If you have any improvements on this, please post a comment. I’m always looking to tweak this. Thanks.


Wrestling With the Telerik MVC Grid Control (Part 3)

In part 2 of this series on the Telerik MVC Grid control, we discussed the back-end code for supporting the master level of our grid. Here’s a list of tasks we need to take care of for the detail grid:

  1. Implementing the detail view within the grid component definition.
  2. Implementing additional JavaScript functions to handle the detail grid events.
  3. Implementing a View Model to support the detail grid.
  4. Implementing several controller actions to support grid CRUD functionality.
  5. Implementing helper methods.

Where I don’t list all of the code below (mainly the controller actions), you can get it by downloading the full example, or keep up with any changes on GitHub.

Extending the Grid Declaration in the View

Realize that the detail view generates detail grids (plural) at runtime, one for each expanded master row. The way the detail level of a grid is handled, it’s pretty much another sophisticated “client template” hanging off the master row, built from another grid. That’s why the whole definition is wrapped in a ClientTemplate option:

.DetailView(details => details
    .ClientTemplate(Html.Telerik()
        .Grid<OrderViewModel>()
            .Name("Orders_<#= CustomerId #>")

Note the very explicit name we’re giving to each detail grid instance (via the Name option), making use of the master row’s CustomerId value. You’ll see its importance later on.

We’ll specify the detail columns next, starting with a column that contains our edit and delete buttons. Notice that we made sure only the DatePlaced column is filterable. In order to allow filtering at all, you must first apply this option to the grid (shown later), and then explicitly turn off filtering for the columns you don’t want it for. We’re also specifying a format for the DatePlaced column, and overriding some default column titles:

.Columns(columns =>
{
    columns.Command(commands =>
    {
        commands.Edit().ButtonType(GridButtonType.Image);
        commands.Delete().ButtonType(GridButtonType.Image);
    }).Width(80);

    columns.Bound(o => o.DatePlaced)
        .Format("{0:MM/dd/yyyy}");
    columns.Bound(o => o.OrderSubtotal)
        .Title("Subtotal")
        .Filterable(false);
    columns.Bound(o => o.OrderTax)
        .Title("Tax")
        .Filterable(false);
    columns.Bound(o => o.OrderTotal)
        .Title("Total")
        .Filterable(false);
    columns.Bound(o => o.OrderChannelName)
        .Title("Channel")
        .Filterable(false);
})

Similar to what we did in the master grid for customers, we’re going to want to support inserting new rows for orders at the detail level:

.ToolBar(commands => commands.Insert()
    .ButtonType(GridButtonType.ImageAndText)
        .ImageHtmlAttributes(new { style = "margin-left:0" }))

As in the master grid, we need to specify the DataBinding options; declaring the AJAX actions that the grid will call when performing CRUD operations on the detail rows. We’re also passing in customerId, since that’s needed for each method.

  • In the Select method, the customerId is used for deciding which customer to load the orders for.
  • In the Insert method, the customerId is used for deciding which customer to add a new order for.
  • In the Update method, the order is an Entity Framework navigation property of a customer, so customerId is used for fetching the customer.
  • In the Delete method, the order is an Entity Framework navigation property of a customer, so customerId is used for fetching the customer.
.DataBinding(dataBinding => dataBinding.Ajax()
    .Select("AjaxOrdersForCustomerHierarchy", "Home", new { customerId = "<#= CustomerId #>" })
    .Insert("AjaxAddOrder", "Home", new { customerId = "<#= CustomerId #>" })
    .Update("AjaxSaveOrder", "Home", new { customerId = "<#= CustomerId #>" })
    .Delete("AjaxDeleteOrder", "Home", new { customerId = "<#= CustomerId #>" }))

Now, since orderId uniquely identifies an order, we need to specify that as a DataKeys parameter used by both the Update and Delete methods:

.DataKeys(keys => keys
    .Add(o => o.OrderId)
        .RouteKey("OrderId"))

We’ll wire up our grid events next (discussed later):

.ClientEvents(events => events
    .OnError("onError")
    .OnDataBound("onDataBoundOrders")
    .OnEdit("onEditOrders"))

We’ll finish off our grid definition by making it pageable with 15 rows per page, supporting keyboard navigation, specifying that the detail grid is editable using a popup window, and making it sortable and filterable (keeping in mind that we shut off most filtering at the column level). Note that since this is actually a ClientTemplate, the whole detail grid needs to be converted to an HTML string. Finally, we need to tack on a Render command, otherwise the grid won’t get displayed at all. For some reason, some examples on Telerik’s site omit this.

        .Pageable(pagerAction => pagerAction.PageSize(15))
        .KeyboardNavigation()
        .Editable(editing => editing.Mode(GridEditMode.PopUp))
        .Sortable()
        .Filterable()
        .ToHtmlString()
    ))
.Render();

Slight Detour — Fixing a Validation Bug in the Master Grid

Before we get to the supporting detail grid code, I want to revisit an issue I alluded to in part 2. Again, here is the CustomerViewModel:

public class CustomerViewModel
{
    [ScaffoldColumn(false)]
    public int CustomerId { get; set; }

    [Required]
    [DisplayName("Account Number")]
    public string AccountNumber { get; set; }

    [Required]
    [Remote("CheckDuplicateCustomerName", 
    		"Home", 
    		AdditionalFields = "CustomerId, FirstName, MiddleName", 
    		ErrorMessage = "This name has already been used for a customer. Please choose another name.")]
    [DisplayName("Last Name")]
    public string LastName { get; set; }

    [Required]
    [Remote("CheckDuplicateCustomerName", 
    		"Home", 
    		AdditionalFields = "CustomerId, LastName, MiddleName", 
		ErrorMessage = "This name has already been used for a customer. Please choose another name.")]
    [DisplayName("First Name")]
    public string FirstName { get; set; }

    [DisplayName("Middle Name")]
    [Remote("CheckDuplicateCustomerName", 
    		"Home", 
    		AdditionalFields = "CustomerId, LastName, FirstName", 
    		ErrorMessage = "This name has already been used for a customer. Please choose another name.")]
    public string MiddleName { get; set; }

    [DisplayName("Middle Initial")]
    public string MiddleInitial { get; set; }
}

If you recall, I mentioned that any fields we mark with the [ScaffoldColumn(false)] attribute will not be displayed in the grid nor on the pop-up edit dialog used when we edit or add a customer. But there’s an additional side effect to us using this on the CustomerId field — our remote validation CheckDuplicateCustomerName method always returns a duplicate error, even if we’re editing an existing record. We’re passing CustomerId as an AdditionalFields field because we’re using it to allow us to ignore a duplicate error if the existing record is the current customer record. But, as it turns out, since we’re using [ScaffoldColumn(false)], it also hides CustomerId from the AdditionalFields parameter. Null is being passed into the validation method. So we have to do two things:

  1. Remove [ScaffoldColumn(false)] from CustomerId in the view model. Unfortunately, this causes CustomerId to be editable in the pop-up edit and add dialogs. So, we also need to…
  2. …add the following line to the onEditCustomers JavaScript function (the OnEdit master grid event handler):
$(e.form).find("#CustomerId").closest(".editor-field").prev().andSelf().hide();

Now we’ve forced CustomerId off of the pop-up, yet we can continue to use it in our remote validation method.
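
For reference, here’s a hypothetical sketch of the CheckDuplicateCustomerName action that those [Remote] attributes point to. The _context data access is an assumption; substitute your own repository or Entity Framework context:

public JsonResult CheckDuplicateCustomerName(string lastName, string firstName,
    string middleName, int? customerId)
{
    // Treat it as a duplicate only if the matching record is NOT the customer
    // currently being edited (customerId arrives as null when the field isn't posted).
    bool isDuplicate = _context.Customers.Any(c =>
        c.LastName == lastName &&
        c.FirstName == firstName &&
        c.MiddleName == middleName &&
        c.CustomerId != customerId);

    return isDuplicate
        ? Json("This name has already been used for a customer. Please choose another name.",
            JsonRequestBehavior.AllowGet)
        : Json(true, JsonRequestBehavior.AllowGet);
}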

Detail Grid Events

Now that we solved that issue, here are the event handlers for the detail grid. I’m also including the replaceDeleteConfirmation helper function that’s shared with the master grid. The OnError event handler is also reproduced here, since it’s shared by both the master and detail grids as well. We’re using onError to display serious issues (normally caught in the catch blocks in controller actions), that we’re stuffing in the response header. I’d normally handle these more gracefully, but this is fine for a “quick & dirty”:

function onExpandCustomer() {
    $(".t-detail-cell").css({
        "padding-left": "80px",
        "padding-bottom": "30px"
    });
}

function onDataBoundOrders() {
    $(this).find(".t-grid-add").first().text("Add new Order").prepend("&lt;span class='t-icon t-add'&gt;");
    replaceDeleteConfirmation(this, "Order");
}

function onEditOrders(e) {
    var popup = $("#" + e.currentTarget.id + "PopUp");
    var popupDataWin = popup.data("tWindow");

    popup.css({ "left": "700px", "top": "400px" });
    //popupDataWin.center(); // Use if you'd rather center the dialog instead of explicitly positioning it.

    if (e.mode == "insert")
        popupDataWin.title("Add new Order");
    else
        popupDataWin.title("Edit Order");

    var url = '@Url.Action("GetOrderChannels", "Home")';
    var orderChannel = $('#OrderChannelId').data('tDropDownList');
    orderChannel.loader.showBusy();

    $.get(url, function (data) {
        orderChannel.dataBind(data);
        orderChannel.loader.hideBusy();
        orderChannel.select(function (dataItem) {
            if (e.mode == 'edit') {
                return dataItem.Value == e.dataItem['OrderChannelId'];
            } else {
                return dataItem.Value == 1; // Default to Phone.
            }
        });
    });
}

function replaceDeleteConfirmation(item, itemType) {
    var grid = $(item).data('tGrid');

    $(item).find('.t-grid-delete').click(function () {
        grid.localization.deleteConfirmation = "Are you sure you want to delete this " + itemType + "?";
    });
}

Note that dynamically changing the “Add” button text has to be done differently for the detail grid than we did for the master. If you recall, we were able to change the button text for the master grid directly in the $(document).ready function. That’s because the button only exists once in the entire page. But since each master row requires its own “Add” button for adding orders, we have to change the button text as we databind the order rows for each customer, in the onDataBoundOrders event handler for OnDataBound. We’re also dynamically changing the “Delete” confirmation text in this function.

The other interesting function is the event handler for OnEdit, onEditOrders. We’re explicitly positioning the pop up dialog here, by first grabbing a reference to the pop up. You’ll notice that we’re referencing the event parameter currentTarget.id. This is a reason why it’s important to uniquely name each detail grid, as mentioned earlier.

var popup = $("#" + e.currentTarget.id + "PopUp");

Once we have a reference to the pop up dialog, we need to grab a reference to its window (yes, although it appears redundant, the window is just a portion of the entire dialog).

var popupDataWin = popup.data("tWindow");

Now that we have a reference to each, we can dynamically position the pop up, either explicitly, or centering it by calling the undocumented center function of its window:

popup.css({ "left": "700px", "top": "400px" });
//popupDataWin.center(); // Use if you'd rather center the dialog instead of explicitly positioning it.

We’re also dynamically changing the pop up dialog’s title bar, depending upon the edit mode:

if (e.mode == "insert")
    popupDataWin.title("Add new Order");
else
    popupDataWin.title("Edit Order");

Using an Editor Template

Next, since we’re using a drop down list for the order channel, we’re dynamically populating the list. First, we build the URL of the action we’re going to call via AJAX. Next, we create a reference to the drop down list. The next line of code displays an animated progress indicator for the AJAX call, which follows. Once the AJAX call completes, we bind the result to the list, get rid of the progress indicator, and initialize the currently selected order channel in the list:

var url = '@Url.Action("GetOrderChannels", "Home")';
var orderChannel = $('#OrderChannelId').data('tDropDownList');
orderChannel.loader.showBusy();

$.get(url, function (data) {
    orderChannel.dataBind(data);
    orderChannel.loader.hideBusy();
    orderChannel.select(function (dataItem) {
        if (e.mode == 'edit') {
            return dataItem.Value == e.dataItem['OrderChannelId'];
        } else {
            return dataItem.Value == 1; // Default to Phone.
        }
    });
});

The above loading of the order channel drop down implies the use of an editor template. We told the view to use an editor template for the Channel property by applying the [UIHint(“OrderChannel”)] attribute to it. Here’s the template code we’re using when displaying the editor pop up (which must be named OrderChannel.cshtml in order for the view and UIHint to find it):

@(Html.Telerik().DropDownList()
        .Name("OrderChannelId")
        .HtmlAttributes(new { style = "width:400px" })
)
<p />

We happen to be making use of Telerik’s drop down list in this same MVC extension library. If you have experience with this control, you may be wondering why we didn’t make use of the DataBinding method to load the channels into the list. Unfortunately, by the time the data is loaded, it’s too late to initialize the selected item. Therefore, we’re explicitly making the AJAX call within the onEditOrders event handler.
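
For completeness, here’s a hypothetical sketch of the GetOrderChannels action that the $.get call hits. The _context data access (and the entity’s property names) are assumptions; the important part is that it returns Text/Value pairs that the drop down list’s client-side dataBind() can consume:

public JsonResult GetOrderChannels()
{
    // Shape the results as Text/Value pairs, which is what the Telerik
    // DropDownList expects when data binding on the client.
    var channels = _context.OrderChannels
        .OrderBy(c => c.Name)
        .Select(c => new { Text = c.Name, Value = c.OrderChannelId })
        .ToList();

    return Json(channels, JsonRequestBehavior.AllowGet);
}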

Here’s the order view model. Also note that I’m not validating DatePlaced, aside from making it a required field. I leave that as an exercise for you:

public class OrderViewModel
{
    [ScaffoldColumn(false)]
    public int OrderId { get; set; }

    [ScaffoldColumn(false)]
    public int CustomerId { get; set; }

    [Required]
    [DisplayName("Order Placed")]
    [DataType(DataType.Date)]
    public DateTime? DatePlaced { get; set; }

    [Required]
    [DisplayName("Subtotal")]
    public decimal? OrderSubtotal { get; set; }

    [Required]
    [DisplayName("Tax")]
    public decimal? OrderTax { get; set; }

    [Required]
    [DisplayName("Total")]
    public decimal? OrderTotal { get; set; }

    [ScaffoldColumn(false)]
    public int OrderChannelId { get; set; }

    [DisplayName("Channel")]
    [UIHint("OrderChannel")]
    public string OrderChannelName { get; set; }
}

Some Final Cosmetic Touches

There is one more event handler I added to show how you can dynamically position the detail grid. First, I added a declaration for an additional master grid event handler, OnDetailViewExpand. You may have seen a previous article I wrote on how to take advantage of this event in other ways:

.OnDetailViewExpand("onExpandCustomer")

This event handler simply adjusts the padding of the detail cell (which is actually the detail order grid for a customer):

function onExpandCustomer() {
    $(".t-detail-cell").css({
        "padding-left": "80px",
        "padding-bottom": "30px"
    });
}

Sample Application Download

Well, that completes my three part series. Again, you can download a full sample application, or keep up with possible changes on GitHub.

That’s the basics for creating a master / detail Telerik MVC grid, with a few extras thrown in to show you how to work around some idiosyncrasies. You can pretty much add additional detail levels in the same manner. Like I’ve mentioned, this is not the only way to go about it, but it has worked for me. If you have other ideas, please let me know.


Technical User Groups – The Tribe of Passionate Geeks

Happy 10th Birthday, INETA! The bug got me in the mid-70s. My math teacher in junior high, Mr. Blumenfeld, introduced us to a fascinating contraption on a tall stool that appeared, at first glance, to be an adding machine of some sort. But the thing was programmable, and came with this very nifty manual showing all the instructions you could program into it. I was mesmerized. He’d pull out the machine once a week and give a lesson on it. But an incident caused by a couple of students led him to punish the entire class and terminate those lessons. It was pretty devastating, especially since it had triggered a passion that has stayed with me ever since.

It wasn’t until I entered high school two years later that I got my first taste of a “real” computer. I was introduced to BASIC by my programming teacher, Mr. Saperstein, on the Wang and Olivetti desktop machines. I strongly preferred the Olivetti, because it was a lot sleeker than the Wang, which was very “terminal” and plastic looking, and just looked a lot older. If I recall correctly, the Olivetti machine had a brownish casing, and seemed more modern. I made sure I started my projects on that machine so I had to be allowed to continue using it every class, since the disks where my projects were saved couldn’t be swapped between machines. We also had a Commodore Pet, but although the keyboard with all the strange graphic characters was interesting, students pretty much ignored that machine for some reason.

The first real program I wrote for class, of course, was a baseball simulation game, since I was always a huge fan. I spent hours at home creating dice games using stats from books, crunching numbers on the $100 calculator I got as a gift for my Bar Mitzvah (and which I STILL have to this day). That first programming project gave me an unbelievable feeling — to be able to create something out of nothing was so empowering!

I desperately wanted something to program at home. I wanted a home computer, but nothing was really available yet in the mid to late 70s (at least what I was aware of). But one day I noticed at a Consumers Distributing store that they were selling a programmable Texas Instruments calculator (TI-45?). When I finally saved up enough ($200?), I walked two miles to the store to buy it. I still have this somewhere. I came across it, along with its manual as I was cleaning out some old junk recently, but I have no idea where I placed it since.

In January 1980, a couple of months after I started dating my future wife, my parents gave me a choice: I could either go to Disney World with the rest of the family, or I could have my first real personal computer — a TRS-80 Model 1, with 4K of RAM and Level 1 BASIC. It was a no-brainer. First, the computer lasted a lot longer than the trip, and second (and more importantly), I had just started going out with Lorri, and I didn’t want to go away. This was one of the easiest and best decisions I’ve ever made. I’ve never looked back from either benefit.

In the early 80s, the vast majority of my time was spent with Lorri or the TRS-80. One day, while working as a keypunch monitor / programming tutor at Brooklyn College, a friend (and fellow TRS-80 user) came over to show me the 80-Micro magazine he subscribed to for the TRS-80. As I started skimming through it, I was shaking so strongly from excitement, it must have been visible to all those around me. This brought my little computer to a whole new level. I was introduced by the “community” of users to so many things I didn’t realize the machine was capable of.

Although this wasn’t actually a user group experience, it was my first taste of what being part of a larger community of “tribe members” felt like. I had discovered that there were many other people out there who shared my passion, and who I could learn from. It was addicting. I devoured everything about the computer, and all other computers in my life from that point forward.

It wasn’t until the late 80s that I had my first exposure to a real user group. A friend of mine brought me to the Clipper User’s Group at MLK High School on Amsterdam Avenue in NYC. Wow — these are my people! I was hooked. I went to every meeting from that point forward for several years, while Clipper was my primary programming tool, and met some amazing people. That group also kick-started me on getting my first commercial product released, when my business partner and I did our first demo at one of the meetings.

That group also became the model for regular team meetings I held for my consulting company throughout the 90s. We were all passionate about programming, and it was a way for us to get together to learn and discuss technology for technology’s sake — not just in a work environment.

When I closed up my company, and started working at my next job, I finally got involved full-time with Microsoft technologies. But because I worked far from NYC, I rarely went to user group meetings anymore. Occasionally, I’d attend a developer’s conference or something. It was a rare, exhilarating experience being around like-minded people in a learning environment. It continued to stoke the flames of my passion for programming. But as some of you may know from earlier blog posts, my career started to move away from programming into management during the mid to late 2000s. I’d try to get to .NET user group meetings in NYC, and still try to get to developer conferences (on my own dime, now that I was in management) as often as possible. But I started to feel like an outsider. I still felt like the attendees and speakers were part of my tribe, but I started to feel like an impostor.

Since I was unhappily in management, I needed to do something to keep up with the development world, as well as stay connected to what I considered my tribe. I did find a user group close to work, in Stamford, CT, but it was held with less and less consistency. It was getting more difficult for me to make it to user group meetings in NYC, but when I attended one in late 2006, I asked one of the leaders of the NYC .NET Developer’s Group, Bill Zack, if I could make an announcement. Although I had absolutely no idea yet how and where I could pull this off, I announced that I wanted to start a .NET user group up in Westchester, near where I lived. I mentioned that I’d make further announcements when it became a reality, and asked people to contact me if they’d be interested in a group in that area. It was after this meeting that Bill and Peter Laudati, the Microsoft Developer Evangelist in the area, made me aware of INETA (which is celebrating their 10th anniversary this month). They gave me some suggestions, including resources from INETA, for getting the group started.

I dug through the material, and my wife and I started looking for a local venue that could support such a group for free (since user groups are all virtually free for their members). We had very little luck. Most places wanted to charge, or required liability insurance. Sadly, I put off this idea for a while, still trying to get to the NYC meetings as often as possible.

In May of 2007, I was contacted by Bill Zack, who mentioned that Louis Edouard (who used to run the Stamford group at UConn, but was finding it overwhelming to do it all himself), and Leo Junquera, a fellow geek from the local business community, were looking to restart the group at UConn. They were looking for a third volunteer to help with the rebirth. I jumped at the opportunity. Since it was held close enough to Westchester as well, and was easier for local folk to get to rather than NYC, we decided to restart the group under a new name: The Fairfield / Westchester .NET User Group. We had our first meeting in June, 2007, and the rest, as they say, is history.

Since then, I’ve been heavily involved in running eight code camp events and two monthly user groups (we also run a SQL Server group at UConn). It’s been incredible. Not only am I living the dream of actively participating in a community of like-minded “geeks,” it’s also given me the confidence to get back into full-time development, and I no longer feel like an outsider impostor. This is my tribe, and we share a common passion. We love to learn, and our interest in development lives well beyond the walls of work.

Happy 10th birthday to INETA, the group which is instrumental in building our tribe of passionate geeks! If you’re a like-minded individual, and you really care about this field (and your career), I urge you to join a local user group. You can find one near you by checking the listings on the INETA site.
