JIT Learning

When I first entered the field of software development, becoming a so-called expert meant learning a handful of technologies. It was challenging, but it was doable.

This is no longer possible.

Today, we need to be able to apply JIT (just-in-time) learning techniques to keep up. It’s just not possible to learn everything about a single tool, much less every tool we’ll need to use on a given project. In Microsoft’s .NET framework alone, there are over 10,000 classes as of this writing. If you’re a .NET C# Web developer, you need to have at least a working knowledge of the .NET framework, the CLR, Visual Studio, C#, HTML, JavaScript, CSS, Windows, IIS, SQL (no matter what engine), and the specifics of a particular SQL engine. And in most systems, you may need to understand another handful of technologies, core concepts, and third party tools.

So when do you learn all of this? Just-in-time.

You can accumulate knowledge of the core technologies over several projects, yet still only touch the surface. And rarely does a new project come along that doesn't require learning at least one technology you've never touched, or perhaps never even heard of, before. For example, on my latest project, I added Twitter Bootstrap, Telerik's KendoUI, and Dapper to my arsenal. In addition, I've explored Font Awesome and LESS for incorporation into a future release. I've also expanded my JavaScript and jQuery knowledge to make better use of those tools.

So how do you keep up with all these tools and technologies? Well, you can try to anticipate everything you think you'll need to learn, but aside from a few educated guesses, you'd have to be clairvoyant to keep up with the changes in our field. It's often like using a waterfall SDLC. I don't think it really works anymore. There are too many unseen forces working just under the radar, and you'll constantly be blindsided.

How do I keep up with this stuff? Just-in-time learning. For the most part, I learn as I go. Since most new tools we need to use build upon the core concepts we’ve built up over our experiences, the learning curve is usually not so large for adding something new. Part of my strategy is using supplemental learning to build up that core skill set, which I’ll discuss as well.

This is my current strategy for learning. Since we all learn somewhat differently (in our own combination of kinesthetic, visual, and auditory styles), your mileage may vary:

  1. Research: Unless you have a team leader assigning which technologies to use on a project, you’ll likely be involved in researching the best solution for a particular requirement. For example, we wanted to start our project by using a framework to help drive the look and feel of our web apps, so we started comparing such tools. We decided upon Twitter’s Bootstrap framework. I watched intro tutorials, read reviews, viewed sample code, and experimented with the samples.
  2. Video (Passive): Once I’ve decided what I want to learn, I usually start by watching a series of videos on the topic. Pluralsight is easily my favorite choice for an intro to some of the most common technologies and tools, although more obscure or new tools may not (yet) be covered. YouTube is another great source for such tools. Of course, strongly supported tools may have their own video tutorials, although I usually find those lacking. It seems to be an afterthought for a lot of companies, and production is often poor or inconsistent. Since, in my role as a consultant, I’m expected to be an expert on the tools I’m using (unless a new technology is dictated by the client), I normally use my breakfast time before business hours to watch these videos. It allows me to immerse myself in the technology in a passive manner, which helps get me acquainted before diving in hands-on. If you’re lucky enough to attend a local user group meeting on the topic, that’s also a great way to get an intro as well as allow for direct Q&A. But it’s rare to have such perfect timing, unless the technology you’re about to use is the new “flavor of the week,” and the rest of the world is learning about it at the same time.
  3. Video (Active): Although I’m still in more of a passive state of mind at breakfast, by lunch I’ve usually been in coding mode, so this is a good time to re-watch parts of the video and actually try out some of the examples being discussed. Although video is great for pausing and rewinding, it’s a bit awkward to pinpoint the exact locations of what you want to re-watch, so if example files are available for download, I prefer playing around with those. Be careful, though, since it’s too easy to have the examples do the work for you, since they’re usually already fully written. Without the hands-on (read: typing in yourself), it usually won’t sink in as quickly.
  4. Google / Bing == Stack Overflow: As you play around with examples, you’ll likely have some questions that aren’t yet explained by the point you’ve reached in the video course. I normally find that it’s easier to search for answers to my questions instead of trying to find them in the tool’s documentation (if it even exists). Since the best search results usually end up at Stack Overflow, I spend a lot of time reading answers there. Keep a close eye on the timestamp of the answers, though. They may be outdated. But if it’s a good answer, it may also have a direct link to the part of the documentation you’ll need.
  5. Web Articles (Blog and Otherwise): When it comes time to dive into a specific piece of the technology I’m trying to use while learning, I start focusing on specific online articles. Several years ago, I’d save and read magazine articles. Well, I mainly saved them with the expectation that I’d eventually find the need to read some of those articles. I’d say that happened with 5% to 10% of them. But we don’t even need to do that anymore. Since many articles are available online, allowing for random access, the magazine is truly obsolete. I still subscribe to a couple, but I think that’s mainly to hold on to the memories of a bygone time. Besides, I’m sure they’re making the font on those things smaller every year. Or it’s my eyes :) Seriously, I’d often start an article in a magazine, only to finish it online.
  6. Books: With all this JIT learning, there’s still that nagging feeling that you could be doing things better. I feel like that all the time, and it used to bother me. No longer. I’ve learned to become more pragmatic over the years. Job # 1 is to deliver a solid solution, making it as maintainable as feasible. But refactoring should be built into subsequent work, whether or not you do some refactoring during the TDD (or otherwise, unit test) process (if your shop encourages that — which it should). This is the time to supplement your knowledge with a deeper understanding and best practices in the technology and tool you JIT-learned. This is where books become useful to me. Even if a book is inherently a bit outdated, it’s still useful, because core concepts and best practices live longer than specifics. I rarely read technical books cover-to-cover anymore. I may read a few introductory chapters, but then I’d skim through specific chapters based upon where I’m focused.
  7. Deep-Dive Videos: But I usually reach for a detailed video course instead of a book. Although Pluralsight has some deep-dive topics in addition to their introductory tutorials, I feel the TekPub videos complement them quite well, and focus more on the deep-dives. They’re usually opinionated, and they often focus on best practices, and make you really understand the topic in ways you’ve never thought of before. Watching someone code and think out loud at the same time is often as valuable as pair programming. Both sites (and there are others) are well worth the investment in your future.

In between my JIT learning cycles, I spend those free hours supplementing my knowledge with deeper dives, as I described in points 6 and 7, above. I use those breakfast and lunch sessions to fill in any gaps, and do some soul-cleansing refactoring in subsequent sprints based on that learning. Such exercises helped me become a better C# and JavaScript coder over the past year.

I also use the off-cycles to learn other technologies I predict, with some certainty, that I’ll be using within the next year or so. For example, I’ve been learning MonoTouch, MonoGame, and XNA in anticipation of implementing some app ideas and starting a new venture.

As developers, our education will always be an ongoing process. There is just so much to learn. We must develop a strategy just to keep up or get ahead of the game, yet remain current and productive. Although your strategy may differ from mine, hopefully I’ve provided some ideas to get you started.


Handling Session and Authentication Timeouts in ASP.NET MVC

There’s a lot more than meets the eye when you need to handle session and authentication timeout scenarios in ASP.NET MVC. For some reason, I expected this to be a no-brainer when I first worked on an app that needed this functionality. It turns out there are several complications that we need to be aware of. On top of that, be prepared for the potential of a lot of test points on a single page.

Server Timeout Checks

We’ll create a couple of action filters to provide cross-cutting checks for timeout scenarios. The first will normally be hit when the browser session has timed out (because I’d set that to a shorter time span than authentication), but it will also handle the case where the authentication has timed out first:

[AttributeUsage(AttributeTargets.Class | AttributeTargets.Method, Inherited = true, AllowMultiple = true)]
public class SessionExpireFilterAttribute : ActionFilterAttribute
{
    public override void OnActionExecuting(ActionExecutingContext filterContext)
    {
        HttpContext ctx = HttpContext.Current;

        // If the browser session or authentication session has expired...
        if (ctx.Session["UserName"] == null || !filterContext.HttpContext.Request.IsAuthenticated)
        {
            if (filterContext.HttpContext.Request.IsAjaxRequest())
            {
                // For AJAX requests, we're overriding the returned JSON result with a simple string,
                // indicating to the calling JavaScript code that a redirect should be performed.
                filterContext.Result = new JsonResult { Data = "_Logon_" };
            }
            else
            {
                // For round-trip posts, we're forcing a redirect to Home/TimeoutRedirect/, which
                // simply displays a temporary 5 second notification that they have timed out, and
                // will, in turn, redirect to the logon page.
                filterContext.Result = new RedirectToRouteResult(
                    new RouteValueDictionary {
                        { "Controller", "Home" },
                        { "Action", "TimeoutRedirect" }
                });
            }
        }

        base.OnActionExecuting(filterContext);
    }
}

[AttributeUsage(AttributeTargets.Class | AttributeTargets.Method, Inherited = true, AllowMultiple = true)]
public class LocsAuthorizeAttribute : AuthorizeAttribute
{
    protected override void HandleUnauthorizedRequest(AuthorizationContext filterContext)
    {
        HttpContext ctx = HttpContext.Current;

        // If the browser session has expired...
        if (ctx.Session["UserName"] == null)
        {
            if (filterContext.HttpContext.Request.IsAjaxRequest())
            {
                // For AJAX requests, we're overriding the returned JSON result with a simple string,
                // indicating to the calling JavaScript code that a redirect should be performed.
                filterContext.Result = new JsonResult { Data = "_Logon_" };
            }
            else
            {
                // For round-trip posts, we're forcing a redirect to Home/TimeoutRedirect/, which
                // simply displays a temporary 5 second notification that they have timed out, and
                // will, in turn, redirect to the logon page.
                filterContext.Result = new RedirectToRouteResult(
                    new RouteValueDictionary {
                        { "Controller", "Home" },
                        { "Action", "TimeoutRedirect" }
                });
            }
        }
        else if (filterContext.HttpContext.Request.IsAuthenticated)
        {
            // Otherwise the reason we got here was because the user didn't have access rights to the
            // operation, and a 403 should be returned.
            filterContext.Result = new HttpStatusCodeResult(403);
        }
        else
        {
            base.HandleUnauthorizedRequest(filterContext);
        }
    }
}

As you can see, for both attributes we’re using a session variable holding the user name as an indication of whether a session timeout occurred. We’re checking to see if either the browser session or the authentication has expired. I like to set the browser session to a shorter time period than authentication, because I end up running into extra issues to code around if the authentication expires first while the session is still active.

Then we’re checking if this is an AJAX request. Since we cannot immediately redirect upon such a request, we instead return a JSON result containing the string “_Logon_”. Later, within a JavaScript function, we’ll check for this as one of the possible values used to determine if a timeout occurred.

By the way, in the second attribute’s HandleUnauthorizedRequest override, we’re handling unauthorized access differently from timeouts (unfortunately, MVC 3 lumps the two together out of the box). I got this idea from this article on Stack Overflow. I believe the next version of MVC is supposed to provide better control for this by default.

The Timeout Warning Message Page

If this wasn’t an AJAX request, we simply redirect to a /Home/TimeoutRedirect page, which briefly displays a message explaining to the user that their session timed out, and that they’ll be redirected to the logon page. We use a meta refresh tag (after 5 seconds) in this view:

<meta http-equiv="refresh" content="5;url=/Account/Logon/" />

<h2>
    Sorry, but your session has timed out. You'll be redirected to the Log On page in 5 seconds...
</h2>
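
The TimeoutRedirect action that serves this view isn’t shown in the filters above. It can be trivial; here’s a minimal sketch, assuming a standard HomeController (illustrative only):

public class HomeController : Controller
{
    // Serves the timeout notification view shown above. It's deliberately not
    // decorated with the timeout/authorization filters, since anyone landing
    // here has, by definition, already timed out.
    public ActionResult TimeoutRedirect()
    {
        return View();
    }
}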

The JavaScript Check

The following JavaScript function would be called in the success, error, and complete callback functions on a jQuery.Ajax call. We use it to check if the response returned an indication that a timeout occurred, before attempting to process. It assumes that the parameter, data, is passed in from the AJAX call response.

This function expects that one of three returned values indicate a timeout occurred:

  1. A redirect was already attempted by the controller, likely due to an authentication timeout. Since an AJAX response is usually expecting a JSON return value, and since the redirect is attempting to return the full actual Log On page, this function checks the responseText for the existence of an HTML <title> of “Log On” (the default log on page title in an MVC app).
  2. A redirect is in the process of being attempted by the controller, likely due to an authentication timeout. Since an AJAX response is usually expecting a JSON return value, and since the redirect is attempting to return a full redirect (302) info page, this function checks the responseText for the existence of an HTML <title> of “Object moved” (the default 302 page title).
  3. If a session timeout occurred, the value “_Logon_” should be returned by the controller action handling the AJAX call. The above action filters check to see if the session variable “UserName” is null, which would indicate a session timeout, but not necessarily an authentication timeout.

This function also expects a TimeoutRedirect action and a CheckTimeout AJAX action on the Home controller. If you use a different controller or actions, you’ll need to modify the URLs specified in the function. The parameter, data, should be the response from an AJAX call attempt.

function checkTimeout(data) {
    var thereIsStillTime = true;

    if (data) {
        if (data.responseText) {
            if ((data.responseText.indexOf("<title>Log On</title>") > -1) || (data.responseText.indexOf("<title>Object moved</title>") > -1) || (data.responseText === '"_Logon_"')) thereIsStillTime = false;
        } else {
            if (data == "_Logon_") thereIsStillTime = false;
        }

        if (!thereIsStillTime) {
            window.location.href = "/Home/TimeoutRedirect";
        }
    } else {
        $.ajax({
            url: "/Home/CheckTimeout/",
            type: 'POST',
            dataType: 'json',
            contentType: 'application/json; charset=utf-8',
            async: false,
            complete: function (result) {
                thereIsStillTime = checkTimeout(result);
            }
        });
    }

    return thereIsStillTime;
}

The Forced AJAX Attempt

There may be times you want to check for a timeout scenario even if your app doesn’t require an AJAX call. That’s why the function is written so that if no parameter is passed in, a simple AJAX call will be made, forcing communication with the server in order to get back session and authentication information, so we can see if a timeout had occurred. There’s no way a browser would know this information until communication with the server is attempted. Once that AJAX call is made, this function will call itself with an actual data value that can now be interrogated.
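
The CheckTimeout action that backs this forced call isn’t listed above. It can be essentially empty, as long as it’s decorated with the filters so the timeout checks actually run. A hypothetical sketch:

[SessionExpireFilter]
[LocsAuthorize]
public JsonResult CheckTimeout()
{
    // If either filter detected a timeout, it has already replaced the result
    // with the "_Logon_" JSON payload or a redirect, and this body never runs.
    // Otherwise, return something harmless for checkTimeout() to ignore.
    return Json("OK");
}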

Client-Side Calling Code Sample

The function returns true if no timeout occurred yet. We simply execute our callback logic if the result of this call is true (no timeout occurred):

$.ajax({
    url: "/MyController/MyAction",
    type: 'POST',
    dataType: 'json',
    data: jsonData,
    contentType: 'application/json; charset=utf-8',
    success: function (result) {
        if (checkTimeout(result)) {
            // There was no timeout, so continue processing...
        }
    },
    error: function (result) {
        if (checkTimeout(result)) {
            // There was no timeout, so continue processing...
        }
    }
});

Again, if you want to check for a timeout where no AJAX call is needed, such as for a click event when the user is navigating a list box, just call checkTimeout() with no parameter. Just note that a simple AJAX call will be injected, so be aware of potential performance impacts, and don’t overuse this. Also, be aware that some browsers, such as IE, will automatically cache AJAX results, and the call may not be made (and, therefore, the timeout check won’t occur). You may have to turn off AJAX caching ($.ajaxSetup({ cache: false })) in this case.

If you have any improvements on this, please post a comment. I’m always looking to tweak this. Thanks.


Wrestling With the Telerik MVC Grid Control (Part 3)

In part 2 of this series on the Telerik MVC Grid control, we discussed the back-end code for supporting the master level of our grid. Here’s a list of tasks we need to take care of for the detail grid:

  1. Implementing the detail view within the grid component definition.
  2. Implementing additional JavaScript functions to handle the detail grid events.
  3. Implementing a View Model to support the detail grid.
  4. Implementing several controller actions to support grid CRUD functionality.
  5. Implementing helper methods.

I don’t list all the code below (mainly the controller actions), but you can get all of it by downloading the full example, or keep up with any changes on GitHub.

Extending the Grid Declaration in the View

Realize that the grid generates detail grids (plural) at runtime, one for each expanded master row. The way the detail level of a grid is handled, it’s pretty much another sophisticated “client template” hanging off the master row, built from another grid. That’s why the whole definition is wrapped in a ClientTemplate option:

.DetailView(details => details
    .ClientTemplate(Html.Telerik()
        .Grid<OrderViewModel>()
            .Name("Orders_<#= CustomerId #>")

Note the very explicit name we’re giving to each detail grid instance (via the Name option), making use of the master row’s CustomerId value. You’ll see its importance later on.

We’ll specify the detail columns next, starting with a column that contains our edit and delete buttons. Notice that we made sure only the DatePlaced column is filterable. In order to allow filtering at all, you must first apply this option to the grid (shown later), and then explicitly turn off filtering for the columns you don’t want it for. We’re also specifying a format for the DatePlaced column, and overriding some default column titles:

.Columns(columns =>
{
    columns.Command(commands =>
    {
        commands.Edit().ButtonType(GridButtonType.Image);
        commands.Delete().ButtonType(GridButtonType.Image);
    }).Width(80);

    columns.Bound(o => o.DatePlaced)
        .Format("{0:MM/dd/yyyy}");
    columns.Bound(o => o.OrderSubtotal)
        .Title("Subtotal")
        .Filterable(false);
    columns.Bound(o => o.OrderTax)
        .Title("Tax")
        .Filterable(false);
    columns.Bound(o => o.OrderTotal)
        .Title("Total")
        .Filterable(false);
    columns.Bound(o => o.OrderChannelName)
        .Title("Channel")
        .Filterable(false);
})

Similar to what we did in the master grid for customers, we’re going to want to support inserting new rows for orders at the detail level:

.ToolBar(commands => commands.Insert()
    .ButtonType(GridButtonType.ImageAndText)
        .ImageHtmlAttributes(new { style = "margin-left:0" }))

As in the master grid, we need to specify the DataBinding options, declaring the AJAX actions that the grid will call when performing CRUD operations on the detail rows. We’re also passing in customerId, since that’s needed for each method.

  • In the Select method, the customerId is used for deciding which customer to load the orders for.
  • In the Insert method, the customerId is used for deciding which customer to add a new order for.
  • In the Update method, the order is an Entity Framework navigation property of a customer, so customerId is used for fetching the customer.
  • In the Delete method, the order is an Entity Framework navigation property of a customer, so customerId is used for fetching the customer.
.DataBinding(dataBinding => dataBinding.Ajax()
    .Select("AjaxOrdersForCustomerHierarchy", "Home", new { customerId = "<#= CustomerId #>" })
    .Insert("AjaxAddOrder", "Home", new { customerId = "<#= CustomerId #>" })
    .Update("AjaxSaveOrder", "Home", new { customerId = "<#= CustomerId #>" })
    .Delete("AjaxDeleteOrder", "Home", new { customerId = "<#= CustomerId #>" }))

Now, since OrderId uniquely identifies an order, we need to specify that as a DataKeys parameter used by both the Update and Delete methods:

.DataKeys(keys => keys
    .Add(o => o.OrderId)
        .RouteKey("OrderId"))

We’ll wire up our grid events next (discussed later):

.ClientEvents(events => events
    .OnError("onError")
    .OnDataBound("onDataBoundOrders")
    .OnEdit("onEditOrders"))

We’ll finish off our grid definition by making it pageable with 15 rows per page, supporting keyboard navigation, specifying that the detail grid is editable using a popup window, and making it sortable and filterable (keeping in mind that we shut off most filtering at the column level). Note that since this is actually a ClientTemplate, the whole detail grid needs to be converted to an HTML string. Finally, we need to tack on a Render command, otherwise the grid won’t get displayed at all. For some reason, some examples on Telerik’s site omit this.

        .Pageable(pagerAction => pagerAction.PageSize(15))
        .KeyboardNavigation()
        .Editable(editing => editing.Mode(GridEditMode.PopUp))
        .Sortable()
        .Filterable()
        .ToHtmlString()
    ))
.Render();

Slight Detour — Fixing a Validation Bug in the Master Grid

Before we get to the supporting detail grid code, I want to revisit an issue I alluded to in part 2. Again, here is the CustomerViewModel:

public class CustomerViewModel
{
    [ScaffoldColumn(false)]
    public int CustomerId { get; set; }

    [Required]
    [DisplayName("Account Number")]
    public string AccountNumber { get; set; }

    [Required]
    [Remote("CheckDuplicateCustomerName", 
    		"Home", 
    		AdditionalFields = "CustomerId, FirstName, MiddleName", 
    		ErrorMessage = "This name has already been used for a customer. Please choose another name.")]
    [DisplayName("Last Name")]
    public string LastName { get; set; }

    [Required]
    [Remote("CheckDuplicateCustomerName", 
    		"Home", 
    		AdditionalFields = "CustomerId, LastName, MiddleName", 
		ErrorMessage = "This name has already been used for a customer. Please choose another name.")]
    [DisplayName("First Name")]
    public string FirstName { get; set; }

    [DisplayName("Middle Name")]
    [Remote("CheckDuplicateCustomerName", 
    		"Home", 
    		AdditionalFields = "CustomerId, LastName, FirstName", 
    		ErrorMessage = "This name has already been used for a customer. Please choose another name.")]
    public string MiddleName { get; set; }

    [DisplayName("Middle Initial")]
    public string MiddleInitial { get; set; }
}

If you recall, I mentioned that any fields we mark with the [ScaffoldColumn(false)] attribute will be displayed neither in the grid nor on the pop-up edit dialog used when we edit or add a customer. But there’s an additional side effect to using this on the CustomerId field — our remote validation CheckDuplicateCustomerName method always returns a duplicate error, even if we’re editing an existing record. We’re passing CustomerId as an AdditionalFields field because we use it to ignore a duplicate error when the matching record is the customer currently being edited. But, as it turns out, since we’re using [ScaffoldColumn(false)], it also hides CustomerId from the AdditionalFields parameter, so null is passed into the validation method. So we have to do two things:

  1. Remove [ScaffoldColumn(false)] from CustomerId in the view model. Unfortunately, this causes CustomerId to be editable in the pop-up edit and add dialogs. So, we also need to…
  2. …add the following line to the onEditCustomers JavaScript function (the OnEdit master grid event handler):
$(e.form).find("#CustomerId").closest(".editor-field").prev().andSelf().hide();

Now we’ve forced CustomerId off of the pop-up, yet we can continue to use it in our remote validation method.
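
For reference, the CheckDuplicateCustomerName remote validation action isn’t listed in this series either. A hypothetical sketch of how it might use CustomerId to ignore the record currently being edited (the db data access is a placeholder):

public JsonResult CheckDuplicateCustomerName(
    string lastName, string firstName, string middleName, int? customerId)
{
    // MVC remote validation posts the field being validated plus the AdditionalFields,
    // so the same action can serve the LastName, FirstName, and MiddleName attributes.
    bool duplicateExists = db.Customers.Any(c =>
        c.LastName == lastName &&
        c.FirstName == firstName &&
        c.MiddleName == middleName &&
        c.CustomerId != (customerId ?? 0));   // Ignore the customer being edited.

    return Json(!duplicateExists, JsonRequestBehavior.AllowGet);
}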

Detail Grid Events

Now that we’ve solved that issue, here are the event handlers for the detail grid. I’m also including the replaceDeleteConfirmation helper function that’s shared with the master grid. The OnError event handler is also reproduced here, since it’s shared by both the master and detail grids as well. We’re using onError to display serious issues (normally caught in the catch blocks of the controller actions) that we’re stuffing into the response header. I’d normally handle these more gracefully, but this is fine for a “quick & dirty”:

function onExpandCustomer() {
    $(".t-detail-cell").css({
        "padding-left": "80px",
        "padding-bottom": "30px"
    });
}

function onDataBoundOrders() {
    $(this).find(".t-grid-add").first().text("Add new Order").prepend("&lt;span class='t-icon t-add'&gt;");
    replaceDeleteConfirmation(this, "Order");
}

function onEditOrders(e) {
    var popup = $("#" + e.currentTarget.id + "PopUp");
    var popupDataWin = popup.data("tWindow");

    popup.css({ "left": "700px", "top": "400px" });
    //popupDataWin.center(); // Use if you'd rather center the dialog instead of explicitly position it.

    if (e.mode == "insert")
        popupDataWin.title("Add new Order");
    else
        popupDataWin.title("Edit Order");

    var url = '@Url.Action("GetOrderChannels", "Home")';
    var orderChannel = $('#OrderChannelId').data('tDropDownList');
    orderChannel.loader.showBusy();

    $.get(url, function (data) {
        orderChannel.dataBind(data);
        orderChannel.loader.hideBusy();
        orderChannel.select(function (dataItem) {
            if (e.mode == 'edit') {
                return dataItem.Value == e.dataItem['OrderChannelId'];
            } else {
                return dataItem.Value == 1; // Default to Phone.
            }
        });
    });
}

function replaceDeleteConfirmation(item, itemType) {
    var grid = $(item).data('tGrid');

    $(item).find('.t-grid-delete').click(function () {
        grid.localization.deleteConfirmation = "Are you sure you want to delete this " + itemType + "?";
    });
}

Note that dynamically changing the “Add” button text has to be done differently for the detail grid than it was for the master. If you recall, we were able to change the button text for the master grid directly in the $(document).ready function. That’s because the button only exists once in the entire page. But since each master row requires its own “Add” button for adding orders, we have to change the button text as we databind the order rows for each customer, in the onDataBoundOrders event handler for OnDataBound. We’re also dynamically changing the “Delete” confirmation text in this function.

The other interesting function is the event handler for OnEdit, onEditOrders. We’re explicitly positioning the pop up dialog here, by first grabbing a reference to the pop up. You’ll notice that we’re referencing the event parameter currentTarget.id. This is one reason why it’s important to uniquely name each detail grid, as mentioned earlier.

var popup = $("#" + e.currentTarget.id + "PopUp");

Once we have a reference to the pop up dialog, we need to grab a reference to its window (yes, although it appears redundant, the window is just a portion of the entire dialog).

var popupDataWin = popup.data("tWindow");

Now that we have a reference to each, we can dynamically position the pop up, either explicitly, or centering it by calling the undocumented center function of its window:

popup.css({ "left": "700px", "top": "400px" });
//popupDataWin.center(); // Use if you'd rather center the dialog instead of explicitly position it.

We’re also dynamically changing the pop up dialog’s title bar, depending upon the edit mode:

if (e.mode == "insert")
    popupDataWin.title("Add new Order");
else
    popupDataWin.title("Edit Order");

Using an Editor Template

Next, since we’re using a drop down list for the order channel, we’re dynamically populating the list. First, we build the URL of the action we’re going to call via AJAX. Next, we create a reference to the drop down list. The next line of code displays an animated progress indicator for the AJAX call, which follows. Once the AJAX call completes, we bind the result to the list, get rid of the progress indicator, and initialize the currently selected order channel in the list:

var url = '@Url.Action("GetOrderChannels", "Home")';
var orderChannel = $('#OrderChannelId').data('tDropDownList');
orderChannel.loader.showBusy();

$.get(url, function (data) {
    orderChannel.dataBind(data);
    orderChannel.loader.hideBusy();
    orderChannel.select(function (dataItem) {
        if (e.mode == 'edit') {
            return dataItem.Value == e.dataItem['OrderChannelId'];
        } else {
            return dataItem.Value == 1; // Default to Phone.
        }
    });
});

The above loading of the order channel drop down implies the use of an editor template. We told the view to use an editor template for the Channel property by applying the [UIHint("OrderChannel")] attribute to it. Here’s the template code we’re using when displaying the editor pop up (which must be named OrderChannel.cshtml in order for the view and UIHint to find it):

@(Html.Telerik().DropDownList()
        .Name("OrderChannelId")
        .HtmlAttributes(new { style = "width:400px" })
)
<p />

We happen to be making use of Telerik’s drop down list in this same MVC extension library. If you have experience with this control, you may be wondering why we didn’t make use of the DataBinding method to load the channels into the list. Unfortunately, by the time the data is loaded, it’s too late to initialize the selected item. Therefore, we’re explicitly making the AJAX call within the onEditOrders event handler.
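
The GetOrderChannels action just needs to return the channels in the Text/Value shape the drop down list’s client-side dataBind call expects. A hypothetical sketch (the db data access and property names are placeholders):

public JsonResult GetOrderChannels()
{
    // Project the channels into Text/Value pairs for the Telerik drop down list.
    var channels = db.OrderChannels
        .Select(c => new { Text = c.OrderChannelName, Value = c.OrderChannelId })
        .ToList();

    // The editor template's AJAX call uses $.get, so allow GET requests.
    return Json(channels, JsonRequestBehavior.AllowGet);
}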

Here’s the order view model. Also note that I’m not validating DatePlaced, aside from making it a required field. I leave that as an exercise for you:

public class OrderViewModel
{
    [ScaffoldColumn(false)]
    public int OrderId { get; set; }

    [ScaffoldColumn(false)]
    public int CustomerId { get; set; }

    [Required]
    [DisplayName("Order Placed")]
    [DataType(DataType.Date)]
    public DateTime? DatePlaced { get; set; }

    [Required]
    [DisplayName("Subtotal")]
    public decimal? OrderSubtotal { get; set; }

    [Required]
    [DisplayName("Tax")]
    public decimal? OrderTax { get; set; }

    [Required]
    [DisplayName("Total")]
    public decimal? OrderTotal { get; set; }

    [ScaffoldColumn(false)]
    public int OrderChannelId { get; set; }

    [DisplayName("Channel")]
    [UIHint("OrderChannel")]
    public string OrderChannelName { get; set; }
}

Some Final Cosmetic Touches

There is one more event handler I added to show how you can dynamically position the detail grid. First, I added a declaration for an additional master grid event handler, OnDetailViewExpand. You may have seen a previous article I wrote on how to take advantage of this event in other ways:

.OnDetailViewExpand("onExpandCustomer")

This event handler simply adjusts the padding of the detail cell (which is actually the detail order grid for a customer):

function onExpandCustomer() {
    $(".t-detail-cell").css({
        "padding-left": "80px",
        "padding-bottom": "30px"
    });
}

Sample Application Download

Well, that completes my three part series. Again, you can download a full sample application, or keep up with possible changes on GitHub.

That’s the basics for creating a master / detail Telerik MVC grid, with a few extras thrown in to show you how to work around some idiosyncrasies. You can pretty much add additional detail levels in the same manner. Like I’ve mentioned, this is not the only way to go about it, but it has worked for me. If you have other ideas, please let me know.


Technical User Groups – The Tribe of Passionate Geeks

Happy 10th Birthday, INETA! The bug got me in the mid-70s. My math teacher in junior high, Mr. Blumenfeld, introduced us to a fascinating contraption on a tall stool that appeared, at first glance, to be an adding machine of some sort. But the thing was programmable, and came with this very nifty manual showing all the instructions you could program into it. I was mesmerized. He’d pull out the machine once a week and give a lesson on it. But an incident caused by a couple of students led him to punish the entire class and terminate those lessons. It was pretty devastating, especially since it had triggered a passion that has stayed with me ever since.

It wasn’t until I entered high school two years later that I got my first taste of a “real” computer. I was introduced to BASIC by my programming teacher, Mr. Saperstein, on the Wang and Olivetti desktop machines. I strongly preferred the Olivetti, because it was a lot sleeker than the Wang, which was very “terminal” and plastic looking, and just looked a lot older. If I recall correctly, the Olivetti machine had a brownish casing, and seemed more modern. I made sure I started my projects on that machine so I had to be allowed to continue using it every class, since the disks where my projects were saved couldn’t be swapped between machines. We also had a Commodore Pet, but although the keyboard with all the strange graphic characters was interesting, students pretty much ignored that machine for some reason.

The first real program I wrote for class, of course, was a baseball simulation game, since I was always a huge fan. I spent hours at home creating dice games using stats from books, crunching numbers on the $100 calculator I got as a gift for my Bar Mitzvah (and which I STILL have to this day). That first programming project gave me an unbelievable feeling — to be able to create something out of nothing was so empowering!

I desperately wanted something to program at home. I wanted a home computer, but nothing was really available yet in the mid to late 70s (at least what I was aware of). But one day I noticed at a Consumers Distributing store that they were selling a programmable Texas Instruments calculator (TI-45?). When I finally saved up enough ($200?), I walked two miles to the store to buy it. I still have this somewhere. I came across it, along with its manual as I was cleaning out some old junk recently, but I have no idea where I placed it since.

In January 1980, a couple of months after I started dating my future wife, my parents gave me a choice. I could either go to Disney World with the rest of the family, or I could have my first real personal computer — a TRS-80 Model 1, with 4K of RAM and Level 1 BASIC. It was a no-brainer. First, the computer lasted a lot longer than the trip, and second (and more importantly), I had just started going out with Lorri, and I didn’t want to go away. This was one of the easiest and best decisions I’ve ever made. I’ve never looked back from either benefit.

In the early 80s, the vast majority of my time was spent with Lorri or the TRS-80. One day, while working as a keypunch monitor / programming tutor at Brooklyn College, a friend (and fellow TRS-80 user) came over to show me the 80-Micro magazine he subscribed to for the TRS-80. As I started skimming through it, I was shaking so strongly from excitement, it must have been visible to all those around me. This brought my little computer to a whole new level. I was introduced by the “community” of users to so many things I didn’t realize the machine was capable of.

Although this wasn’t actually a user group experience, it was my first taste of what being part of a larger community of “tribe members” felt like. I had discovered that there were many other people out there who shared my passion, and who I could learn from. It was addicting. I devoured everything about the computer, and all other computers in my life from that point forward.

It wasn’t until the late 80s that I had my first exposure to a real user group. A friend of mine brought me to the Clipper User’s Group at MLK High School on Amsterdam Avenue in NYC. Wow — these are my people! I was hooked. I went to every meeting from that point forward for several years, while Clipper was my primary programming tool, and met some amazing people. That group also kick-started me on getting my first commercial product released, when my business partner and I did our first demo at one of the meetings.

That group also became the model for regular team meetings I held for my consulting company throughout the 90s. We were all passionate about programming, and it was a way for us to get together to learn and discuss technology for technology’s sake — not just in a work environment.

When I closed up my company, and started working at my next job, I finally got involved full-time with Microsoft technologies. But because I worked far from NYC, I rarely went to user group meetings anymore. Occasionally, I’d attend a developer’s conference or something. It was a rare, exhilarating experience being around like-minded people in a learning environment. It continued to stoke the flames of my passion for programming. But as some of you may know from earlier blog posts, my career started to move away from programming into management during the mid to late 2000s. I’d try to get to .NET user group meetings in NYC, and still try to get to developer conferences (on my own dime, now that I was in management) as often as possible. But I started to feel like an outsider. I still felt like the attendees and speakers were part of my tribe, but I started to feel like an impostor.

Since I was unhappily in management, I needed to do something to keep up with the development world, as well as stay connected to what I considered my tribe. I did find a user group close to work, in Stamford, CT, but it was held with less and less consistency. It was getting more difficult for me to make it to user group meetings in NYC, but when I attended one in late 2006, I asked one of the leaders of the NYC .NET Developer’s Group, Bill Zack, if I could make an announcement. Although I had absolutely no idea yet how and where I could pull this off, I announced that I wanted to start a .NET user group up in Westchester, near where I lived. I mentioned that I’d make further announcements when it became a reality, and asked people to contact me if they’d be interested in a group in that area. It was after this meeting that Bill and Peter Laudati, the Microsoft Developer Evangelist in the area, made me aware of INETA (which is celebrating their 10th anniversary this month). They gave me some suggestions, including resources from INETA, for getting the group started.

I dug through the material, and my wife and I started looking for a local venue that could support such a group for free (since user groups are all virtually free for their members). We had very little luck. Most places wanted to charge, or required liability insurance. Sadly, I put off this idea for a while, still trying to get to the NYC meetings as often as possible.

In May of 2007, I was contacted by Bill Zack, who mentioned that Louis Edouard (who used to run the Stamford group at UConn, but was finding it overwhelming to do it all himself), and Leo Junquera, a fellow geek from the local business community, were looking to restart the group at UConn. They were looking for a third volunteer to help with the rebirth. I jumped at the opportunity. Since it was held close enough to Westchester as well, and was easier for local folk to get to rather than NYC, we decided to restart the group under a new name: The Fairfield / Westchester .NET User Group. We had our first meeting in June, 2007, and the rest, as they say, is history.

Since then, I’ve been heavily involved in running eight code camp events and two monthly user groups (we also run a SQL Server group at UConn). It’s been incredible. Not only am I living the dream of actively participating in a community of like-minded “geeks,” it’s also given me the confidence to get back into full-time development, and I no longer feel like an outsider impostor. This is my tribe, and we share a common passion. We love to learn, and our interest in development lives well beyond the walls of work.

Happy 10th birthday to INETA, the group which is instrumental in building our tribe of passionate geeks! If you’re a like-minded individual, and you really care about this field (and your career), I urge you to join a local user group. You can find one near you by checking the listings on the INETA site.


Restoring Expanded Row State with Telerik’s MVC Grid Control

*** Edited on January 30, 2012 – Forgot that Firefox does not support innerText, so I replaced all innerText references with our best friend jQuery’s text() method. ***

Slight detour… I know I’m behind on posting part three of my series on wrestling with Telerik’s MVC Grid control, but a lot has happened since I posted part two. One is Telerik’s release of Kendo UI. There’s been some concern (myself included) that Kendo UI may eventually replace the Telerik MVC extensions (since there are plans to include some MVC-specific server-side wrappers), but they’re currently denying that. I still plan on posting that third article for completeness’ sake, but I’m not 100% sure of the MVC extensions’ future.

Also, the more I use it, and the more comfortable I am working down at the metal with JavaScript, HTML, CSS, and jQuery (not metal, but still a de facto standard), the less enamored I am of such a library. Sure, it provides a useful layer of abstraction, much in the same way ASP.NET Web Pages provides a layer of abstraction for building entire sites. But as my series of articles implies, if you want to go deeper than what the abstraction provides, you end up fighting against the tool, and it no longer feels right.

The Challenge

I came upon such a situation today. Maybe I missed something, but struggling so mightily with what I’d expect to be handled by a simple attribute makes me feel the tool is just getting in the way. I had a request from a client for a master / detail grid to retain its expansion state between operations, such as adding, editing, and deleting rows at the master level. My initial thought was, “This should be easy — it’s probably just a parameter I need to check.” Yet, my experience with the grid gave me an uneasy feeling that it wasn’t going to be so simple.

The First Attempt

So, after not finding such a simple setting, I did some web searching. Of course, the two main resources found were Telerik’s forums (they do have pretty good support) and Stack Overflow. Not much there, aside from some related issues which gave some hints. It made sense to hook into the grid’s client events and API. It seemed obvious that I’d need to hook into the OnDetailViewExpand and OnDetailViewCollapse events to keep track of the master row expansion state. So knowing where to capture this info was easy. But before knowing what to capture, I had to see what’s needed to restore the state after a data refresh.

That’s what cost me hours poring through a deep hierarchy of properties. Here’s the issue: I figured I needed to call the expandRow API method to restore the previously expanded rows. This method requires a jQuery object parameter representing the master row. Here’s their documentation:

var grid = $("#Grid").data("tGrid");
// get the first master table row
var tr = $("#Grid tbody > .t-master-row:eq(0)"); // use .t-master-row to select only the master rows

// expand the row
grid.expandRow(tr);

Ok, simple enough, although this is an example of exposing too much gunk under the hood — without their sample code, you’d have to do a lot of trial and error to figure out exactly which Telerik classes to use in order to select the pieces that make up their controls. Ok, ignoring that, it still should have been straightforward. It seemed the good news was that both OnDetailViewExpand and OnDetailViewCollapse received an event parameter giving us access to a masterRow jQuery object field. Since the expandRow call requires such a parameter, I thought I was set.

Nope. Their example worked fine, selecting the row via $("#Grid tbody > .t-master-row:eq(0)"). But using the retained masterRow reference, absolutely nothing happened. No JavaScript errors, mind you. Just no expansion.

I spent way too much time inspecting every inch of these two seemingly identical structures, single-stepping through both Telerik’s and jQuery’s JavaScript code, and not seeing anything significantly different. Finally, I decided to take a different approach that we’ll discuss below.

The Successful Solution

As mentioned, we’re going to have to hook into the OnDetailViewExpand and OnDetailViewCollapse grid events so we can capture the state of each row. We’ll also need to hook into the OnRowDataBound grid event, because as the rows are bound, we’re going to re-expand the ones that were previously expanded. Here’s the section of the Razor view page where we define the event hooks we’re going to make use of:

.ClientEvents(events => events
    ...
    ...
    ...
    .OnRowDataBound("onRowDataBound")
    .OnDetailViewExpand("onExpand")
    .OnDetailViewCollapse("onCollapse"))

We’re going to use an array to track the expanded rows. I wrote a few utility functions to support the array:

  • addToArray, which we’ll use on expand, and will only add an element if it doesn’t already exist.
  • removeFromArray, which we’ll use upon collapse.
  • isInArray, which returns true if the element already exists.

You may know a better way to handle this in JavaScript, but these generic and reusable functions work nicely, so we’ll go with it.

The rest of the code is just as simple:

  • Since the text content of each row (via jQuery’s text()) should be unique, we’re going to use that as the value we push onto the array in the onExpand handler. We can get this value through a field of the masterRow event argument.
  • In the onCollapse handler, we’re going to remove this value from the array. We can also make the same call before deleting a master row from the grid, but I don’t show that here.
  • Finally, in the onRowDataBound handler, we first need to grab a reference to the master grid. We check if the row’s text value is already in the array, and if so, we make a call to expandRow, passing in the row event argument. For some reason, this works fine, although as I mentioned earlier, using supposedly the same object type (the entire e.masterRow event argument) doesn’t work. *Shrug*
var expandedRows = [];

function onExpand(e) {
    addToArray(expandedRows, $(e.masterRow).text());
}

function onCollapse(e) {
    removeFromArray(expandedRows, $(e.masterRow).text());
}

function onRowDataBound(e) {
    var grid = $("#CustomerGrid").data("tGrid");
    if (isInArray(expandedRows, $(e.row).text())) grid.expandRow($(e.row));
}

function addToArray(arr, value) {
    for (var i = 0; i < arr.length; i++) {
        if (arr[i] === value) return;
    }

    arr.push(value);
}

function removeFromArray(arr, value) {
    for (var i = 0; i < arr.length; i++) {
        if (arr[i] === value) {
            arr.splice(i, 1); // Remove the element entirely; delete would leave an undefined hole in the array.
            return;
        }
    }
}

function isInArray(arr, value) {
    for (var i = 0; i < arr.length; i++) {
        if (arr[i] === value) return true;
    }

    return false;
}

Alternatives

I’m becoming less of a fan of these kinds of libraries, where a different coding paradigm is used to try to “simplify” things. In my mind, keeping it simple means sticking to well-known paradigms, unless a new one has tremendous benefit. This is why I’m intrigued by a jQuery plug-in called DataTables. It uses basic JavaScript constructs to specify customizations.

Although not a grid, another powerful jQuery plug-in I’ve been using is a treeview control called DynaTree. Although it can be pretty complex, at least it takes advantage of straightforward, standard JavaScript and jQuery constructs.

Upon cursory review, Kendo UI appears to take a more bare-bones approach as well, taking advantage of HTML5, JavaScript, and CSS, and allowing for more control, which appeals to us ASP.NET MVC fans. I’ll look into it more deeply, and perhaps write about it in the future. But for now, I’ve invested a lot in the Telerik MVC library for a large project, so I’m sort of stuck with it.
