
I am Jack Histon. My career would not be what it is today without dedication and hard work from software bloggers. My purpose is to give back to that online community. Here I write about programming, software development, architecture, and everything in-between.


  • ASP.NET Core 2.0 - Repository Overview: Razor Pages

    Sunday, 24 September 2017

    Previously in this Series

    1. ASP.NET Core MVC - Repository Overview: Model Binding
    2. ASP.NET MVC Core - Repository Overview: Value Providers
    3. ASP.NET Core 2.0 - Repository Overview: Action Discovery
    4. ASP.NET Core 2.0 - Repository Overview: Action Selection

    Introduction

    This article is the fifth in a series I'm dedicating to reviewing the code and design of the ASP.NET Core GitHub repository. The series tries to explain the underlying mechanisms of ASP.NET Core, and how all the code fits together to create the framework as we know it at the time of publication.

    This article will discuss Razor Pages within ASP.NET Core 2.0. The previous article discussed how a specific action is selected, given a set of action descriptors found at startup. This article explains how to use Razor Pages, and how they fit into the existing ecosystem.

    What is Razor Pages?

    Many features have been given to us in the latest ASP.NET Core 2.0 release. Part of that release is the brand new Razor Pages feature. Razor Pages is advertised as being useful for page-focused scenarios, where little to no real logic is needed. For example, an about page where the page content is neither dynamic nor based on conditional logic, or a contact page, where a simple form is to be filled out by a user.

    Razor Pages consists of a back-end code file, paired with a view containing C# Razor code. If you're familiar with classic ASP.NET Web Forms, then Razor Pages will give you a feeling of déjà vu. Razor Pages feels like ASP.NET Web Forms, but with a modern approach.

    Razor Pages has access to many existing mechanisms of the ASP.NET Core repository. Access to existing features is possible because Razor Pages and MVC share the same code implementation. Mechanisms such as the Razor engine itself, that is, writing C# code within the view, are identical to those used in MVC. This allows the use of Tag Helpers, View Components, and everything else the Razor engine has to offer.

    Razor Pages and MVC can be used together. Routing in Razor Pages uses the same process of action discovery as MVC, as described in my previous article in the series.

    Razor Pages uses the notion of "handlers". A handler is similar to an action on an MVC controller. Handlers on razor pages and actions on controllers are inserted into the same collection, and so selecting a handler to run is performed with all routes considered.
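
    As a minimal sketch (a hypothetical page model, not taken from the repository), a handler is just a conventionally named method on a PageModel-derived class:

    public class FeedbackModel : PageModel
    {
        // Runs for GET requests to this page.
        public void OnGet()
        {
        }

        // Runs for POST requests; handlers can return action results just like controller actions.
        public IActionResult OnPost()
        {
            return RedirectToPage("/Index");
        }
    }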

    How to set up Razor Pages

    Setting up an application to use MVC is a two-step process. The ASP.NET Core framework has two startup methods that are called by reflection: the ConfigureServices and Configure methods. The methods are used to set up the application's dependency injection system and the middleware pipeline, respectively:

    
    public void ConfigureServices(IServiceCollection services)
    {
        services.AddMvc();
    }
    
    public void Configure(IApplicationBuilder app)
    {
        app.UseMvc();
    }
    
    

    It was a conscious effort by the ASP.NET Core team to make it easy to use Razor Pages and MVC together. Therefore, to start using Razor Pages, you have to add nothing more to the startup process than what should already be there if using classic MVC.

    So please, stand down if you thought this section was going to be riddled with complication.

    If you would like to know more about how these startup methods are executed, refer to the ASP.NET Hosting GitHub Repository, specifically the WebHostBuilderExtensions file where you call UseStartup.
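
    For reference, a typical ASP.NET Core 2.0 entry point wires the Startup class up through UseStartup (a minimal sketch of the standard Program class):

    public class Program
    {
        public static void Main(string[] args)
        {
            // CreateDefaultBuilder configures Kestrel, configuration sources, and logging;
            // UseStartup<Startup> tells the host to call ConfigureServices and Configure by reflection.
            WebHost.CreateDefaultBuilder(args)
                .UseStartup<Startup>()
                .Build()
                .Run();
        }
    }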

    Creating your first Razor Page

    Razor Pages is designed to be simple. Razor Pages is there to be used for simple pages within your (mostly) static website.

    An example of a simple page is an "about me" page. An "about me" page is the iconic content used by companies and individuals to describe themselves to their audience. It can have many details, such as where the entity comes from, their motivations, and details that might be interesting to the intended audience. However, all of this is static content. This is prime real estate for a razor page.

    To create your first page, you need to provide a place to store it. Razor pages are by default stored under "/Pages", at the root of your application directory. This can be changed by providing a startup configuration:

    
    services
        .AddMvc()
        .WithRazorPagesRoot("/MyRazorPages");
    
    

    With this change, the razor pages engine will search for all razor pages under the "MyRazorPages" folder. This enables the use of custom root directories, where your application may already have a use for the default "/Pages" folder. Customising the root directory also allows two or more applications to share razor pages. For example, two websites may exist under the same corporate umbrella, and their about me sections could have identical content.

    The extension method "WithRazorPagesRoot" is declared in the MvcRazorPagesMvcBuilderExtensions class. This class houses common extensions that can customise how razor pages works. Reading the code for this class, you can see that customising the root directory can be achieved in a different way:

    
    services
        .Configure<RazorPagesOptions>(options => options.RootDirectory = "/MyRazorPages");
    
    

    This code skips the middle-man, and shows how razor pages works by directly modifying the razor pages' options. This sheds light on where your root directory value is used. It is stored on a single, application-wide configuration class: RazorPagesOptions. This means that these options are available through the dependency injection system to any class within the ASP.NET Core framework (and your own code).
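
    As a quick illustration (a hypothetical service of my own, not framework code), any class resolved from the container can read these options through the IOptions<T> abstraction:

    public class PagesRootReporter
    {
        private readonly RazorPagesOptions _options;

        // RazorPagesOptions is surfaced through the options system.
        public PagesRootReporter(IOptions<RazorPagesOptions> options)
        {
            _options = options.Value;
        }

        // Returns "/Pages" unless the root directory has been customised.
        public string GetRootDirectory() => _options.RootDirectory;
    }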

    The RazorProjectPageRouteModelProvider Class

    A good example of a use case for the root directory setting is the RazorProjectPageRouteModelProvider class. The class's primary purpose is to provide route models for the razor pages found in your root directory.

    Reading the class's code, it becomes apparent what we need to provide for our "about me" page.

    The code's first check is to skip any file that starts with an underscore:

    
    if (item.FileName.StartsWith("_"))
    {
        // Pages like _ViewImports should not be routable.
        continue;
    }
    
    

    This means that our razor page should not be prefixed with an "_". So something like "AboutMe" would be a good name.

    The second check in the code shows that we also need the file to be a .cshtml file, marked with the @page directive at the top:

    
    if (!PageDirectiveFeature.TryGetPageDirective(_logger, item, out var routeTemplate))
    {
        // .cshtml pages without @page are not RazorPages.
        continue;
    }
    
    

    With these rules in mind, we end up with a razor page file similar to:

    
    @page
    
    <h1>About me</h1>
    <p>This is my simple razor page</p>
    
    

    This content would then need to be placed in a file called "AboutMe.cshtml" under the root directory (by default "/Pages").

    Once the AboutMe.cshtml file is in place, there is nothing else we need to do. The ASP.NET Core framework will explore your pages root directory, find the file, and route to the page using the page's name. Therefore, to access our brand new AboutMe.cshtml file, navigate to the "/AboutMe" path in your favourite web browser.

    As I have said previously, razor pages is designed to be simple and straightforward. In this example, there is no need even for a code-behind file, as there is no logic. With just one file, we have been able to create routable content that serves a purpose.

    Razor Pages and Forms

    Sometimes, a page needs more than the static content of a typical about me page. A "contact me" page is a good example of interaction with the end user. Generally, you need to collect the user's details, and a message that the user would like to give.

    Here is the view of a typical contact us razor page:

    
    @page
    @model MyApplication.Pages.ContactUsModel
    
    <form method="post">
        <div asp-validation-summary="All"></div>
        <div>
            <label asp-for="Name"></label>
            <div>
                <input asp-for="Name" />
                <span asp-validation-for="Name"></span>
            </div>
        </div>
    
        <div>
            <label asp-for="Message"></label>
            <div>
                <input asp-for="Message" />
                <span asp-validation-for="Message"></span>
            </div>
        </div>
    
        <div>
            <button type="submit">Save</button>
        </div>
    </form>
    
    

    In this contact us page, we can see that we have declared the necessary @page directive, named the file without an underscore prefix, and given it a .cshtml extension; all of these are prerequisites for a file to be classed as a razor page. This file is also placed under the root directory for razor pages (again, by default "/Pages").

    The code-behind file

    To provide value to the contact us page, there needs to be code that handles the posting of the form data:

    
    public class ContactUsModel : PageModel
    {
        [BindProperty]
        public string Name { get; set; }
    
        [BindProperty]
        public string Message { get; set; }
    
        public async Task<IActionResult> OnPostAsync()
        {
            if (!ModelState.IsValid)
            {
                return Page();
            }
    
            ...
    
            return RedirectToPage("/Index");
        }
    }
    
    

    This is shown as a code-behind file. Razor Pages is flexible enough to allow this to be declared in-line with the razor view itself. My preference is a separate file, for clarity. You can define an @functions area like so:

    
    @page
    @model ContactUsModel
    @functions
    {
        public class ContactUsModel : PageModel
        {
            ...
        }
    }
    
    <h1>Contact Us</h1>
    ...
    
    

    But generally, I prefer a separate file, as it makes the intended purpose of each file clearer.

    The code-behind file inherits from the PageModel class. This class provides many helper methods and properties that can help in the handler methods declared on your page's model. The inheritance is not mandatory, and is similar in concept to inheriting from the Controller class when coding in classic MVC.

    There are two properties defined in the file: Name, and Message. Each has an attribute declared on it called BindPropertyAttribute.

    What is this attribute for?

    The attribute is an implementation of the IModelNameProvider interface. This allows us to specify a binding name (used during request binding) other than the property name.
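
    For example, thanks to IModelNameProvider, the binding name can differ from the property name (a hypothetical property, for illustration only):

    // The posted form field is named "contact-name", but it binds to the Name property.
    [BindProperty(Name = "contact-name")]
    public string Name { get; set; }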

    The BindPropertyAttribute also implements the IRequestPredicateProvider interface. This allows custom code to narrow down when a property is actually bound. In this case, the BindPropertyAttribute allows binding only when it is not a GET request:

    
    private static bool IsNonGetRequest(ActionContext context)
    {
        return !string.Equals(context.HttpContext.Request.Method, "GET", StringComparison.OrdinalIgnoreCase);
    }
    
    

    This is overridable:

    
    [BindProperty(SupportsGet = true)]
    public string Name { get; set; }
    
    

    The Razor Page Application Model

    A general theme throughout this article is that Razor Pages fits snugly into the same framework implementation as classic MVC. As discussed previously in this series, an application model is built up in order to provide routable data.

    The DefaultPageApplicationModelProvider class will take the model of your razor page and, with reflection, extract binding metadata; metadata that is provided by the handy BindProperty attribute.

    Controller metadata is extracted using the BindingInfo class; Razor pages is no different. Using the BindingInfo class, the application model provider extracts property metadata defined on the razor page model.

    Controller action metadata is extracted through MVC application model providers. Handlers for specific razor page requests are extracted in a similar fashion.

    The DefaultPageApplicationModelProvider class will populate "handler" methods for the application model. But what is a handler method?

    The page application model provider will retrieve all the methods from the page model:

    
    var methods = pageModel.HandlerType.GetMethods();
    
    for (var i = 0; i < methods.Length; i++)
    {
        var handler = CreateHandlerModel(methods[i]);
        if (handler != null)
        {
            pageModel.HandlerMethods.Add(handler);
        }
    }
    
    

    Using this code, we can understand how a method is deemed a "handler" in regards to razor pages (remember, a handler is similar to an action, as seen in the OnPostAsync method in our page model above). The methods on the handler type are looped over, and a handler model is created for each. If this is successful, the handler method is added to the page model.

    This means that we can create a number of methods on our page model, and each could be deemed a "handler" if a handler model is successfully created for it. So what does the CreateHandlerModel method do?

    The CreateHandlerModel method first checks whether your method is a candidate for being a handler:

    
    if (!IsHandler(method))
    {
        return null;
    }
    
    

    To become a handler method, you need to pass the following criteria:

    • It cannot be static
    • It cannot be abstract
    • It cannot be a constructor
    • It cannot be generic
    • It has to be a public method
    • It cannot have the "NonHandlerAttribute" attribute (see the sketch below).
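
    For example, a public method that would otherwise qualify can be explicitly excluded with the NonHandler attribute (a minimal, hypothetical sketch):

    // Public and non-static, but never treated as a handler.
    [NonHandler]
    public void OnPostCleanUp()
    {
        // ...
    }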

    All other method signatures are up for grabs. Our method "OnPostAsync" has the following declaration:

    
    public async Task<IActionResult> OnPostAsync()
    {
        ...
    }
    
    

    Let us go through the rules and make sure it is a candidate for becoming a handler. The method is not static, it is not abstract, it is not a constructor, it is not generic (it does not supply its own generic parameters), it is a public method, and we have not declared a "NonHandlerAttribute" on it. Therefore, we pass the first hurdle of the CreateHandlerModel method.

    The next step in the CreateHandlerModel method is to try and parse the handler name and the HTTP method it will be used for:

    
    if (!TryParseHandlerMethod(method.Name, out var httpMethod, out var handlerName))
    {
        return null;
    }
    
    

    If the system cannot parse the handler method name, then it will not be added as a handler.

    A handler's name needs to be in a specific format to be accepted. The first rule is that the method name needs to start with "On":

    
    // Handler method names always start with "On"
    if (!methodName.StartsWith("On") || methodName.Length <= "On".Length)
    {
        return false;
    }
    
    

    Our method "OnPostAsync" passes this test.

    The code following this parses the rest of the method name to find the HTTP method that should be chosen, and the specific handler name to use. Without going into the code too much, valid names can be of the form OnGet, OnPost, OnPostAsync, OnGetTestAsync, etc.

    What is key here is that the HTTP method and the handler name are optional, and the Async suffix is completely irrelevant (it is ignored). If an HTTP method is excluded, then the handler method will be considered for any HTTP request. If a handler name is specified, then the route data needs to match against this value. You cannot exclude both the name and the HTTP method from a handler; if you do, then it will not be considered a valid handler method.

    So if we take our example of the "OnPostAsync" method, this is a successful handler candidate, as its name starts with On, defines the HTTP method of POST, and has an Async suffix that is ignored. Perfect for our view's HTML form.

    Dealing with Multiple Handlers

    We have defined how handlers are added to the page application model. However, there is nothing stopping us from having multiple handlers per razor page. What if we wanted to have two forms on the same page? What if we wanted to provide some sort of AJAX handler for a specific razor page?

    Defining multiple handlers is easy: just create two different methods following the name conventions outlined above. However, how do we define which form submit button corresponds to which handler?
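
    For example, a single page model might expose two named POST handlers (hypothetical handler names shown):

    public class NewsletterModel : PageModel
    {
        // Invoked when the handler route value is "Subscribe".
        public IActionResult OnPostSubscribe()
        {
            return RedirectToPage("/Index");
        }

        // Invoked when the handler route value is "Unsubscribe".
        public IActionResult OnPostUnsubscribe()
        {
            return RedirectToPage("/Index");
        }
    }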

    You can use the "asp-page-handler" tag helper to achieve this:

    
    <form method="post">
        <input type="submit" asp-page-handler="Test" value="Go to index" />
    </form>
    
    

    This will match up with a handler named "OnPostTest":

    
    public IActionResult OnPostTest()
    {
        return RedirectToPage("/Index");
    }
    
    

    All this does is redirect to an index page. What the "asp-page-handler" tag helper does is evaluate to the following input element:

    
    <input type="submit" value="Go to index" formaction="/About?handler=Test">
    
    

    The key here is the query string added to the formaction attribute.

    When selecting a handler to invoke, the PageActionInvoker class will use the DefaultPageHandlerMethodSelector class to choose an appropriate handler. It selects potential candidates by taking the action descriptor for the current context, and checking to see if it has any handler methods. If it does, then there are important checks that happen to select the best candidate.

    Firstly, if the handler specified a http method name, then it needs to match for the current request:

    
    if (handler.HttpMethod != null &&
        !string.Equals(handler.HttpMethod, context.HttpContext.Request.Method, StringComparison.OrdinalIgnoreCase))
    {
        continue;
    }
    
    

    If this passes, then next it checks the handler name. If no handler name was specified - e.g., "OnPostAsync" - then the handler will be a candidate regardless of the current value of the handler name in the http request. Otherwise, the name needs to match:

    
    else if (handler.Name != null &&
        !handler.Name.Equals(handlerName, StringComparison.OrdinalIgnoreCase))
    {
        continue;
    }
    
    

    So given these checks, let's have an example. An HTTP request arrives with the handler name "DoStuff" and the HTTP method "Post". Possible handler names that can be considered are:

    • OnPost - For a post request, but no handler name specified.
    • OnPostAsync - the async alternative to the previous name.
    • OnDoStuff - For any http method, but handles anything where the handler route value is equal to "DoStuff".
    • OnDoStuffAsync - the async alternative to the previous name.
    • OnPostDoStuff - For a http method of post, and handles anything where the handler route value is equal to "DoStuff".
    • OnPostDoStuffAsync - the async alternative to the previous name.

    If we had a page model that defined all of these, then we would get an ambiguous handler exception. That is why we have to perform an extra step, and assign a score to each of these handlers.

    The more specific a handler, the higher the score it is assigned:

    
    private static int GetScore(HandlerMethodDescriptor descriptor)
    {
        if (descriptor.Name != null)
        {
            return 2;
        }
        else if (descriptor.HttpMethod != null)
        {
            return 1;
        }
        else
        {
            return 0;
        }
    }
    
    

    If an action has a name, then it is automatically seen as the best candidate. If it doesn't specify a name, but specifies an HTTP method, then it is less specific, but still defines something for the current HTTP request context. Finally, if neither a name nor a method is given, then it is the least specific type of handler name. N.B., getting into this scenario should be impossible, as handlers need either a name or an HTTP method. If neither are provided, for example "OnAsync", then it would not be classed as a handler at all.

    Once we have measured each handler with a score, the highest score wins. So taking all the previous possible handlers and their scores:

    • OnPost - 1
    • OnPostAsync - 1
    • OnDoStuff - 2
    • OnDoStuffAsync - 2
    • OnPostDoStuff - 2
    • OnPostDoStuffAsync - 2

    As OnDoStuff, OnDoStuffAsync, OnPostDoStuff, and OnPostDoStuffAsync are all scored equally, and the handler route value is "DoStuff", the framework does not know which one to choose, so an ambiguous exception would be thrown. The solution here is to reduce the number of handlers for this specific razor page (which is the guiding principle for this new feature anyway: keep it simple, stupid (KISS)).

    Getting rid of the handler query parameter

    Razor pages allows you to define your own route parameters within the @page directive of the razor page:

    
    @page "{handler?}"
    
    

    It is the job of the PageDirectiveFeature static class to extract this value.

    The "handler" route parameter is a preserved name for the handler route value. For example, the contact us page is located under the root directory: "/Pages/ContactUs.cshtml". If we placed the handler name in the routing, then we can create a handler that can be routed to without any query string. So given an "OnDoStuff" handler on the contact us page model, the corresponding url path would become "/ContactUs/OnDoStuff", rather than "/ContactUs?handler=OnDoStuff".

    This all works because of the DefaultPageHandlerMethodSelector class checking for a handler name both in the query string, and the route data:

    
    var handlerName = Convert.ToString(context.RouteData.Values[Handler]);
    
    if (string.IsNullOrEmpty(handlerName) &&
        context.HttpContext.Request.Query.TryGetValue(Handler, out StringValues queryValues))
    {
        handlerName = queryValues[0];
    }
    
    

    Without this, we couldn't create nice, SEO-friendly URLs for the different handlers.

    Now that we have multiple handlers per razor page, it resembles a controller and view system, albeit a simpler version (and that's the point).

    Summary

    Razor Pages should be seen as MVC's little brother. A younger brother who learns from its older brother, but ultimately has a different goal in life. A goal that is far simpler (for now), and one that closely resembles the structure of the web application itself.

    A razor page can be stretched to the point where it looks like a controller/view relationship. But that is not its intended purpose. You should strive for your razor pages to be simple, and to serve one purpose.

    MVC and Razor Pages share the same ecosystem, so using both at the same time is possible, even encouraged. A popular approach that is gaining traction is to use razor pages for your actual pages - e.g., about me, contact us, terms & conditions, etc. - and controllers for API-level control, such as AJAX and single-page application back-ends.

    It is up to you to define the line for when to use Razor Pages; like anything in software, it requires a gut feeling. A rule of thumb I have been following is: if I can define a web page as a simple HTTP GET request, then I can probably define it as a simple Razor Page.

    Thank you for reading, and I hope this helps on your journey with Razor Pages.

  • Making a Master Puppeteer

    Wednesday, 13 September 2017

    There are many ways to test a program, from starting minuscule with unit testing, to more grandiose UI testing. Puppeteer falls firmly in the latter category.

    Puppeteer advertises itself as

    a Node library which provides a high-level API to control headless Chrome over the DevTools Protocol

    That is a lot of words. Puppeteer is a walking advertisement of the true potential the Chrome DevTools Protocol has to offer. It oozes ease at the seams, and will leave you with a sense of UI testing in its prime. All this, and free!

    What is Puppeteer?

    Puppeteer is a UI automation tool. It achieves this by using the combination of headless Chrome and the DevTools Protocol. As the quote says, it is a higher-level API that wraps this functionality, making certain UI test automations a breeze.

    The Chrome DevTools Protocol exposes a set of tools that are built into the famous Google Chrome. DevTools is essentially what you get by hitting More Tools -> Developer Tools within your browser. Therefore, the DevTools Protocol is the wheels to your DevTools, i.e., you can now get programmatic with the DevTools in Chrome.

    Headless Chrome is Chrome without the chrome. Yes, you read that correctly. It allows you to interact with Chromium from an environment other than a browser window, i.e., the command line.

    Bringing the power of Chromium and the Blink rendering engine to your command line opens many doors. The biggest use case is automated testing.

    Installation

    Installation is easy, as it can be done with yarn or npm. Just run the following command:

    
    yarn add puppeteer
    # or "npm i puppeteer"
    
    

    This can then be required and run by Node like any Node.js application.

    Creating screenshots

    There are times when you want to test things such as CSS. Making sure that your website's look and feel has not regressed is important.

    For example, to take a screenshot of the front page of my blog:

    
    const puppeteer = require("puppeteer");
    
    (async() => {
        const browser = await puppeteer.launch();
        const page = await browser.newPage();
    
        await page.goto("http://jackhiston.com/");
        await page.screenshot({ path: "jackhiston-blog.png" });
    
        browser.close();
    })();
    
    

    The first thing to do here is to include the puppeteer dependency. With this you can launch a browser instance, which could actually load a visible browser on screen as well, like so:

    
    const browser = await puppeteer.launch({ headless: false });
    
    

    Note the headless option.

    With this you then create a brand new page like you would when navigating in a browser, and then you "goto" a specific URL (in this case my home page).

    We can then use the built-in screenshot functionality to save an image of my home page.

    Crawling a website

    Another use case for Puppeteer is crawling a website's content. In the next example I'm navigating to Hacker News and scraping all the links off the first page:

    
    const puppeteer = require("puppeteer");
    
    (async() => {
        const browser = await puppeteer.launch();
        const page = await browser.newPage();
    
        page.on("console", (...args) => console.log("PAGE LOG:", ...args));
    
        await page.goto("https://news.ycombinator.com", { waitUntil: "networkidle" });
    
        const links = await page.evaluate(() => {
            const anchors = Array.from(document.querySelectorAll(".storylink"));
            return anchors.map(anchor => anchor.textContent);
        });
    
        console.log(links.join("\n"));
    
        browser.close();
    })();
    
    

    One thing to note here is the page.evaluate function. This allows us to inspect the current page we are on, as if we were in the DevTools area of Chrome itself.

    Clicking Links and Navigating

    The final use case I want to showcase is navigation. In the following example I show how you can click a link on a page and wait for the page to finish loading, so as to record results:

    
    const puppeteer = require("puppeteer");
    
    (async() => {
        const browser = await puppeteer.launch();
        const page = await browser.newPage();
    
        await page.goto("https://news.ycombinator.com", { waitUntil: "networkidle" });
    
        await page.click("a.storylink");
    
        var response = await page.waitForNavigation({ waitUntil: "networkidle" });
    
        console.log(await page.title());
        console.log(page.url());
    
        browser.close();
    })();
    
    

    An important function here is page.waitForNavigation. This allows us to wait until the click has fully loaded the new web page, as the click promise resolves as soon as the click event itself has finished, not when the resulting page has loaded.

    This can be very useful when navigating around and testing that a UI's user experience is intact.

    Summary

    The main focus of Puppeteer is to provide an API that can show off the capabilities of the DevTools protocol.

    Tools like Selenium are much more established, and offer cross browser testing as well. Puppeteer doesn't belong in the same grouping as Selenium.

    Puppeteer is just one example of many tools coming out around headless Chrome. At the time of writing, there are a lot of projects out there that use headless Chrome. A good blog post that mentions some is here, by Ken Soh. Other places to look for existing projects using the DevTools protocol are here.

    Puppeteer is maintained by the Chrome DevTools team, and they are looking for contributions! So head on over and be a part of this new movement of headless chrome automation testing.

    Thanks for reading. Please share with friends.

  • ASP.NET Core 2.0 - Repository Overview: Action Selection

    Saturday, 09 September 2017

    Previously in this Series

    1. ASP.NET Core MVC - Repository Overview: Model Binding
    2. ASP.NET MVC Core - Repository Overview: Value Providers
    3. ASP.NET Core 2.0 - Repository Overview: Action Discovery

    Introduction

    This article is the fourth in a series I'm dedicating to reviewing the code and design of the ASP.NET Core GitHub repository. The series tries to explain the underlying mechanisms of ASP.NET Core, and how all the code fits together to create the framework as we know it at the time of publication.

    This article will discuss action selection within ASP.NET Core 2.0. The previous article discussed how actions and controllers are discovered within your application. This article will focus on how a specific action is selected, given a set of action descriptors found on start up.

    I will also try and discuss how the brand new razor pages fits into this, and why the process of action selection is the same for both MVC and razor pages.

    The ActionSelector Class

    
    /// <summary>
    /// A default <see cref="IActionSelector"/> implementation.
    /// </summary>
    public class ActionSelector : IActionSelector
    {
        ...
    }
    
    /// <summary>
    /// Defines an interface for selecting an MVC action to invoke for the current request.
    /// </summary>
    public interface IActionSelector
    {
        IReadOnlyList<ActionDescriptor> SelectCandidates(RouteContext context);
    
        ActionDescriptor SelectBestCandidate(RouteContext context, IReadOnlyList<ActionDescriptor> candidates);
    }
    
    

    The ActionSelector class is the central piece of infrastructure for ASP.NET Core action selection.

    The ActionSelector class itself is registered with the global dependency injection system against the IActionSelector abstraction. Like most of the framework's own registrations, it is only added if there is no previous registration:

    
    services.TryAddSingleton<IActionSelector, ActionSelector>();
    
    

    This means that you can easily create your own implementation of IActionSelector, and provide your own logic for selecting an action descriptor:

    
    public class CustomActionSelector : IActionSelector
    {
        public IReadOnlyList<ActionDescriptor> SelectCandidates(RouteContext context)
        {
            ...
        }
    
        public ActionDescriptor SelectBestCandidate(RouteContext context, IReadOnlyList<ActionDescriptor> candidates)
        {
            ...
        }
    }
    
    // ... and then at startup configuration ...
    
    services.AddSingleton<IActionSelector, CustomActionSelector>();
    
    

    The IActionSelector abstraction is used by MvcAttributeRouteHandler and MvcRouteHandler. Both of these are the main registered IRouter implementations.

    When selecting an action to run, the action selector does not care whether it is a controller or razor page action; all it has is metadata described by an action descriptor. If you would like to learn more about the creation of action descriptors, see my previous article about how the action descriptor collection is populated.

    Dependencies

    
    public ActionSelector(
        IActionDescriptorCollectionProvider actionDescriptorCollectionProvider,
        ActionConstraintCache actionConstraintCache,
        ILoggerFactory loggerFactory)
    {
        ...
    }
    
    

    The ActionSelector class has multiple dependencies, and IActionDescriptorCollectionProvider is no doubt the most important. This dependency provides all the action descriptors that are currently registered within the system.

    The action constraint cache is used to retrieve the constraint(s) on a specific action, e.g., HttpPost.

    The Route Value Cache

    A route value is the value given to a specific route key for a given action. For example, an action by default has the following route key/value pairs:

    
    controller -> Home
    action -> Index
    
    

    Where home and index are the controller and action names, respectively.

    The ActionSelector class will take all the action descriptors provided by the action descriptor collection provider. With these, it creates its own internal cache mapping route keys to route values.

    The action selector class does this by looping through all the actions given to it, and extracting the RouteValues for each action:

    
    // This is a conventionally routed action - so we need to extract the route values associated
    // with this action (in order) so we can store them in our dictionaries.
    var routeValues = new string[RouteKeys.Length];
    for (var j = 0; j < RouteKeys.Length; j++)
    {
        action.RouteValues.TryGetValue(RouteKeys[j], out routeValues[j]);
    }
    
    

    For each and every action descriptor, there will be a route value for every route key defined within the system. If the route key was not specified for a particular action, this will be filled in as null for that particular action.

    For example, an action may have an Area attribute, which specifies that for the route key "area", the value Blog should be used:

    
    [Area("Blog")]
    public IActionResult Index()
    {
        return View();
    }
    
    

    Due to the declaration of this route value, the system will assign null to every other action descriptor for the route key "area".

    Assigning a null value to route keys is necessary for scenarios such as Razor Pages. Razor pages add a "page" route value. If the current context has an "action" route value, then it should match when the action has "page" set. This can be seen explained here in code comments.

    Going back to the caching mechanism, the cache keys are the arrays of route values for each action, and the entries are lists of action descriptors that matched those values. For example, given the action:

    
    public class HomeController : Controller
    {
        public IActionResult Index()
        {
            return View();
        }
    }
    
    

    The route keys and value for this action will be:

    
    controller -> Home
    action -> Index
    
    

    The cache will store the following as its key/value pair:

    
    Key: new string[] { "Home", "Index" },
    Value: new List<ActionDescriptor> { action }
    
    

    Conventional Routing

    The ActionSelector class provides a way to retrieve actions based on conventional routing. The signature of its SelectCandidates method is the following:

    
    IReadOnlyList<ActionDescriptor> SelectCandidates(RouteContext context);
    
    

    This method retrieves a read only list of action descriptors. Each one of these action descriptors describes a particular action within the current application based on conventional routing.

    Using the route data from the passed-in RouteContext, the action selector will loop through all the registered route keys given by the cache, i.e., "action", "controller", "area", etc.:

    
    var keys = cache.RouteKeys;
    var values = new string[keys.Length];
    for (var i = 0; i < keys.Length; i++)
    {
        context.RouteData.Values.TryGetValue(keys[i], out object value);
    
        if (value != null)
        {
            values[i] = value as string ?? Convert.ToString(value);
        }
    }
    
    

    and try and retrieve the actual value matching this route key, from the RouteData given in the RouteContext.

    Each iteration will try to retrieve the value associated with the key. If the value is not null, then it is assigned to the values array. This values array is then used as the key for the entries calculated in the cache:

    
    if (cache.OrdinalEntries.TryGetValue(values, out var matchingRouteValues) ||
        cache.OrdinalIgnoreCaseEntries.TryGetValue(values, out matchingRouteValues))
    {
        return matchingRouteValues;
    }
    
    

    These ordinal entries are built up from the actual route values defined from the conventional routing within your application.

    So from this, the matching route values have been found, and so we are now ready to actually select an action from the selected candidates.

    A Note on Cache Efficiency

    The cache that the action selector uses is built in a very clever way. As previously discussed, to speed up the selection of action candidates, it uses the actual route values as keys, which is beneficial from the perspective of the incoming request's route data. However, the cache actually uses two dictionaries, not just one:

    
    // We need to build two maps for all of the route values.
    OrdinalEntries = new Dictionary<string[], List<ActionDescriptor>>(StringArrayComparer.Ordinal);
    OrdinalIgnoreCaseEntries = new Dictionary<string[], List<ActionDescriptor>>(StringArrayComparer.OrdinalIgnoreCase);
    
    

    One dictionary is for a case-sensitive key/value lookup, and the other is case-insensitive. Why have two?

    It comes down to speed. When using dictionaries, a case-sensitive key lookup is faster than a case-insensitive one, so the code is utilising this fact.

    However, action selection is actually case-insensitive, so two route values can be equal when one is "Index" and another is "index".

    So essentially, if a match is not found with a case-sensitive match first, a case-insensitive match is then used, as shown earlier.

    Attribute Routing

    The MvcAttributeRouteHandler is the handler for attribute routing. This does not use the "SelectCandidates" method to retrieve action descriptors. Attribute routing uses the AttributeRoute class to retrieve applicable action descriptors.

    The AttributeRoute class, like the ActionSelector class, uses the IActionDescriptorCollectionProvider to retrieve all the action descriptors in the system:

    
    var attributeRoutedActions = actions.Where(a => a.AttributeRouteInfo?.Template != null);
    
    

    The AttributeRoute class is only concerned with actions that have attribute route information on them. Contrast this with the ActionSelector class, which is only concerned with actions that don't have attribute routing, i.e., conventionally routed actions.

    Attribute Routing with Route Templates

    A route template is either a built-in or custom part of a URL path. For example, you may have a URL path similar to what my blog site uses:

    
    /2017/09/03/my-custom-blog-post
    
    

    How does this get translated into selecting a specific action?

    
    public class BlogController : Controller
    {
        [Route("{year:int}/{month:int}/{day:int}/{title}", Name = "BlogPostDetails")]
        public IActionResult Get(int year, int month, int day, string title)
        {
            ...
        }
    }
    
    

    In the above route attribute, we have defined a route template. This route template is made up of four different route parameters: year, month, day, and title. So to match this action, the URL path needs to have each of these route parameters provided.

    I also say that the year, month, and day all have to be integers:

    
    {year:int}/{month:int}/{day:int}
    
    

    The section after the colon (:) signifies an inline constraint. If the part before the first path separator (/) is not an integer, then this action will not match the URL, and so on for all the route parameters.

    You can learn more about route templates in the Microsoft documentation.

    Selecting the Best Candidate

    
    public ActionDescriptor SelectBestCandidate(RouteContext context, IReadOnlyList<ActionDescriptor> candidates)
    {
        ...
    }
    
    

    The SelectBestCandidate method is called by both the MvcAttributeRouteHandler class, and the MvcRouteHandler class. They both handle matching an action to the HTTP request through attribute routing and conventional routing, respectively.

    Both handlers are used by the routing system in order to handle incoming HTTP requests, and both use the ActionSelector class to achieve those goals.

    The SelectBestCandidate method accepts a read only list of action descriptors. These are either sourced from the action selector itself, or through the attribute routing system.

    The following code is the core of this method:

    
    var matches = EvaluateActionConstraints(context, candidates);
    
    var finalMatches = SelectBestActions(matches);
    
    

    Evaluating the Constraints

    The SelectBestCandidate method is the point at which the action constraints are evaluated. After the candidates have been selected, each action is evaluated based on its action constraints.

    Action constraints can be used to further narrow the selection of an action, when there are currently multiple candidates that match against the given route values.

    You can define your own action constraints by implementing the IActionConstraint interface:

    
        public interface IActionConstraint : IActionConstraintMetadata
        {
            int Order { get; }
    
            bool Accept(ActionConstraintContext context);
        }
    
    

    There are two pieces of functionality to implement. One is the Order property, and the other is the Accept method.

    The Order property defines at what stage your action constraint will be evaluated. Action constraints with a lower order will be evaluated first.

    N.B., if there are two actions that match a request, and one has an action constraint, then it is deemed a more specific match, and is selected over the action with no action constraint. This is explained on the actual IActionConstraint interface in the source code.
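
    As an illustration (a hypothetical constraint, not part of the framework), a custom constraint could accept an action only when the request carries a particular header:

    // Accepts the action only when the request sends an "X-Api-Version" header with the value "2".
    public class ApiVersionConstraint : IActionConstraint
    {
        public int Order => 0;

        public bool Accept(ActionConstraintContext context)
        {
            var headers = context.RouteContext.HttpContext.Request.Headers;
            return headers.TryGetValue("X-Api-Version", out var values) && values == "2";
        }
    }

    A constraint like this would typically be exposed as an attribute (or created through IActionConstraintFactory) so that it can be placed on an action.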

    Once the constraints are evaluated, then the results are checked, and if there is more than one matching action, then the infamous AmbiguousActionException is thrown.

    Summary

    Action selection knows nothing about your controllers. It knows nothing about your application. All it deals with is a list of action descriptors; action descriptors that have already been evaluated at startup.

    This post went through how an action descriptor is selected based on an incoming request, and discussed the extension points of the selection process. It also discussed how attribute routing, and conventional routing, share common code, and how they differ in the sourcing of these global action descriptors.

    Thank you for reading and I hope this helps. Please share with a friend.

  • The Wonderful World of Webpack

    Monday, 04 September 2017

    Webpack is a JavaScript module bundler, or so the blurb goes. This is an apt name for it. However, what I would like to do in this article, is to expand on the true power of Webpack.

    This article will not explain how to use Webpack. Rather, it explains the reasoning behind it, and what makes it more special than just a bundler.

    Webpack is still a Bundler

    One of the main reasons for tools like Webpack is to solve the dependency problem caused by modules within JavaScript, specifically Node.js.

    Node.js allows you to modularise code. Modularisation of code causes an issue with dependencies. Cyclic dependencies can occur, e.g., A -> B -> A references. What tools like Webpack can do is build an entire dependency graph of all of your referenced modules. With this graph, analysis can be performed to help you alleviate the stress of such a dependency graph.

    Webpack can take multiple entry points into your code, and spit out an output that has bundled your dependency graph into one or more files.

    Webpack is so much more

    For me, what makes Webpack so special are the great extension points it provides.

    Loaders

    Loaders are what I like to refer to as mini-transpilers. They take a file of any kind - e.g., TypeScript, CoffeeScript, JSON, etc. - and produce JavaScript code for later addition to the dependency graph Webpack is building.

    The power of loaders is that they are not in short supply. Loaders are an extension point. You can create your own loader, and there are hundreds of default and third-party loaders out there.

    For example, could there be a point where we would ever want to take a statically typed language like C#, and transpile this into JavaScript for Webpack to understand?

    The limits are boundless with loaders. Loaders can be chained, configured, filtered out based on file type, and more.

    Custom Loader Example

    As the Webpack documentation explains, a loader is just a node module exporting a function:

    
        module.exports = function(src) {
            return src + '\n'
                + 'window.onload = function() { \n'
                + ' console.log("This is from the loader!"); \n'
                + '}';
        };
    
    

    This is a trivial example of what a loader is. All this loader is doing is appending a function to write to the console on window load for the current browser session.

    With this idea in mind, it becomes apparent that we now have the power to take any source input and interpret it in any way we want. So coming back to our previous example, we could take C# as the input, and create a parser that transpiles it into the native JavaScript that Webpack expects.

    A C# to JavaScript transpiler is a bit far-fetched, and in all honesty slightly pointless, but I hope you appreciate how we can leverage loaders in Webpack to make it more than a bundler.

    Plugins

    Plugins allow the customisation of Webpack on a broader scope than the file-by-file approach of loaders. Plugins are where you can add extra functionality to the core of Webpack. For example, you can add a plugin for minification, extract certain text such as CSS from the output, use plugins for compression, and so on.

    Plugins work by having access to the Webpack compiler itself. They have access to all the compilation steps that can and may occur, and can modify those steps. This means a plugin can modify what files get produced, what files to add as assets, and so on.

    A small example of a plugin is the following:

    
    file: './my-custom-plugin.js'
    
    function MyCustomPlugin() {}
    
    MyCustomPlugin.prototype.apply = function(compiler) {
        compiler.plugin('emit', displayCurrentDate);
        compiler.plugin('after-emit', displayCurrentDate)
    }
    
    function displayCurrentDate(compilation, callback) {
        console.log(Date());
    
        callback();
    }
    
    module.exports = MyCustomPlugin;
    
    

    In this example, we are adding two event handlers to two separate event hooks in the Webpack compiler. The outcome of this is one date that is printed to console just before the assets are emitted to the output directory, and one date after the assets have been emitted.

    This plugin can be used in the main Webpack configuration:

    
    var MyCustomPlugin = require('my-custom-plugin');
    
    var webpackConfig = {
        ...
        plugins: [
            new MyCustomPlugin()
        ]
    }
    
    

    This plugin will now run on the emit and after-emit stages of the compilation process. A good list of compiler event hooks is available on the Webpack website.

    The importance of plugins, once again, is that they are an extension point. The way Webpack has been designed is to allow the user to fully extend its core. There are many plugins to choose from, and a lot are third party.

    With this in mind, a plugin could take all your assets that you require, and compress them with an algorithm. In fact, there is already a plugin for this very thing.

    Summary

    Webpack is a module bundler, that is what the label says. It takes your dependency graph, and outputs a browser readable format.

    However, webpack can be so much more.

    What if we could take C# code, and transpile it into JavaScript? What if we could take a YAML configuration file, and create a working program just out of configuration? What if we took an image, and automatically cropped and greyscaled it?

    I think if you start thinking of Webpack as more of a transpiler, not just a bundler, the true power of Webpack can be seen.

    Thanks for reading and hope this helps.

  • ASP.NET Core 2.0 - Repository Overview: Action Discovery

    Saturday, 02 September 2017

    Previously in this Series

    1. ASP.NET Core MVC - Repository Overview: Model Binding
    2. ASP.NET MVC Core - Repository Overview: Value Providers

    Introduction

    This article is the third in a series I'm dedicating to reviewing the code and design of the ASP.NET Core GitHub repository. The series tries to explain the underlying mechanisms of ASP.NET Core, and how all the code fits together to create the framework as we know it at the time of publication.

    ASP.NET Core action discovery is the bread and butter of the framework. The most useful idea that has come out of the framework is the ability to find sections of code based on the metadata of the current HTTP request, i.e., actions.

    Actions are the main point within ASP.NET Core at which a developer can interact with the framework itself. In this article, I am going to discuss how the framework finds your specific action, and how the brand new razor pages fit into the picture.

    N.B. This article will not talk about how razor pages work within the 2.0 framework version. I will dedicate a later post to razor pages.

    Registering Actions

    Actions are provided to the action selection system through the IActionDescriptorCollectionProvider abstraction. The collection provider has a property called ActionDescriptors, which is the main property that provides all action descriptors given to the system. This collection provider is registered with the dependency injection system, and so can be used anywhere in code. This is useful if you want to work with all or a subset of the metadata found in the action descriptors of your application.
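
    For instance (a hypothetical helper of my own, not framework code), you can inject the provider and enumerate every action registered in the application:

    public class ActionLister
    {
        private readonly IActionDescriptorCollectionProvider _provider;

        public ActionLister(IActionDescriptorCollectionProvider provider)
        {
            _provider = provider;
        }

        public IEnumerable<string> GetDisplayNames()
        {
            // ActionDescriptors.Items holds every action descriptor known to the application.
            return _provider.ActionDescriptors.Items.Select(descriptor => descriptor.DisplayName);
        }
    }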

    The collection provider itself has an enumeration of IActionDescriptorProvider implementations.

    An action descriptor provider is something that creates metadata about the actions of your application. These providers are aggregated to build up the collection that the ActionDescriptors property provides:

    
    for (var i = 0; i < _actionDescriptorProviders.Length; i++)
    {
        _actionDescriptorProviders[i].OnProvidersExecuting(context);
    }
    
    

    The two main action descriptor provider instances are the ControllerActionDescriptorProvider, and the PageActionDescriptorProvider. One is the classic MVC action provider and one is the new Razor Pages action descriptor provider, respectively.

    What this means is that the ASP.NET team have hooked the new Razor Pages, introduced in ASP.NET Core 2.0, into the bog-standard action selection sub-system. This allows you to use classic MVC actions alongside the new Razor Pages.

    Finding Controller Actions

    The ControllerActionDescriptorProvider is the main class that will explore your current application for applicable controller actions. The provider will try and retrieve all the action descriptors it can find, adding each one to the main ActionDescriptorProviderContext context, passed down from the collection provider:

    
    foreach (var descriptor in GetDescriptors())
    {
        context.Results.Add(descriptor);
    }
    
    

    The provider will call Build on the ControllerActionDescriptorBuilder class, passing the application model:

    
    protected internal IEnumerable<ControllerActionDescriptor> GetDescriptors()
    {
        var applicationModel = BuildModel();
        ApplicationModelConventions.ApplyConventions(applicationModel, _conventions);
        return ControllerActionDescriptorBuilder.Build(applicationModel);
    }
    
    

    The BuildModel method is where most of the work is done to retrieve all the different controller action descriptors. Firstly, the method will retrieve all the controllers through the main ApplicationPartManager class. The application part manager can be used to add application parts to your runtime. For example, if you wanted to register controllers within another assembly, so that they are used for the retrieval of action descriptors, you can do the following:

    
    var assembly = typeof(ControllerInAssembly).GetTypeInfo().Assembly;
    var part = new AssemblyPart(assembly);
    services
        .AddMvc()
        .ConfigureApplicationPartManager(apm => apm.ApplicationParts.Add(part));
    
    

    This is also explained in my previous post about sharing controllers in assemblies.

    The ApplicationPartManager class also contains a bunch of feature providers. A feature provider is a class that implements IApplicationFeatureProvider. The ones that exist within the core framework include the ControllerFeatureProvider (covered below), among others.

    As can be seen from the names of these providers, they all have a specific feature in mind when building the application model.

    The Controller Feature Provider

    The ControllerFeatureProvider class is the feature we are interested in, as this is the main feature that contains all the controller actions we need to discover.

    The provider will populate a ControllerFeature class. This class will accept all the types given by implementations of the IApplicationPartTypeProvider interface.

    The AssemblyPart is the classic example that implements IApplicationPartTypeProvider. This means that the ControllerFeatureProvider can use it to provide Controller types within an assembly:

    
    foreach (var type in part.Types)
    {
        if (IsController(type) && !feature.Controllers.Contains(type))
        {
            feature.Controllers.Add(type);
        }
    }
    
    

    Once the ControllerFeature is filled with the registered controllers, it is used by the ControllerActionDescriptorProvider to build up an eventual ApplicationModel class through the use of IApplicationModelProvider instances. The application model essentially provides all the metadata associated with the controllers and filters.

    The list of IApplicationModelProvider instances includes the DefaultApplicationModelProvider (covered next), among others.

    Each instance is registered with the dependency injection system, and so can be used anywhere.

    The Default Application Model Provider

    The DefaultApplicationModelProvider instance is the one we are interested in for action discovery. This provider populates the controller models within the main ApplicationModel class.

    With the use of reflection, it will enumerate all the methods on the controller types given to it by the controller feature provider. Methods within the controller are filtered out based on the metadata of the method. The logic for determining this can be found here. The main thing to note is that you can use the NonActionAttribute to stop methods from being registered and added to the controller model being built.
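
    For example (a hypothetical controller), a public method can be kept out of the application model with the NonAction attribute:

    public class HomeController : Controller
    {
        public IActionResult Index()
        {
            return View();
        }

        // Public, but will not be registered as an action.
        [NonAction]
        public string BuildGreeting(string name)
        {
            return $"Hello, {name}";
        }
    }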

    Conventions are then applied to the application model, allowing you to modify it based on the conventions you register.
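
    A convention is just an implementation of IApplicationModelConvention added to MvcOptions.Conventions. Here is a minimal sketch (a hypothetical convention of my own) that makes every controller visible to the API explorer:

    public class ApiExplorerVisibilityConvention : IApplicationModelConvention
    {
        public void Apply(ApplicationModel application)
        {
            // Runs once over the whole application model, after it has been built.
            foreach (var controller in application.Controllers)
            {
                controller.ApiExplorer.IsVisible = true;
            }
        }
    }

    It would then be registered at startup, for example with services.AddMvc(options => options.Conventions.Add(new ApiExplorerVisibilityConvention())).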

    Customising Controller Action Discovery

    With the above explained, an easy way to extend how actions are discovered in traditional MVC is by providing an implementation of the IApplicationFeatureProvider<ControllerFeature> interface:

    
    public class MyControllerFeatureProvider : IApplicationFeatureProvider<ControllerFeature>
    {
        private readonly Type _myCustomControllerType;
    
        public MyControllerFeatureProvider(Type myCustomControllerType)
        {
            _myCustomControllerType = myCustomControllerType;
        }
    
        public void PopulateFeature(IEnumerable<ApplicationPart> parts, ControllerFeature feature)
        {
            feature.Controllers.Add(_myCustomControllerType);
        }
    }
    
    

    This feature provider can then be registered with the application part manager at startup:

    
    services.AddMvc()
        .ConfigureApplicationPartManager(p =>
            p.FeatureProviders.Add(new MyControllerFeatureProvider(typeof(MyCustomerController))));
    
    

    Now that the feature provider is registered, it can be used by the default application model provider to discover the actions that are present on the controller.

    Finding Razor Pages

    The brand new Razor Pages are built in such a way as to fit into the existing framework. Razor Pages resolve to actions just like controller actions do. Retrieving all the razor page action descriptors for a particular application starts with the PageActionDescriptorProvider.

    The PageActionDescriptorProvider implements IActionDescriptorProvider, which is subsequently registered with dependency injection, and thus hooked up to the main IActionDescriptorCollectionProvider (N.B. the collection provider is the main source of action descriptors for your application as previously discussed). The collection provider class will take all registered implementations of the IActionDescriptorProvider, and build up a complete list of action descriptors. This is how Controllers and Razor Pages can co-exist.

    In a similar way to how the ControllerActionDescriptorProvider loops through IApplicationModelProvider instances, the PageActionDescriptorProvider class will loop through all available IPageRouteModelProvider implementations.

    The main implementations of IPageRouteModelProvider are the RazorProjectPageRouteModelProvider and the CompiledPageRouteModelProvider.

    Both these instances are registered with the dependency injection system.

    The Razor Project PageRouteModel Provider

    The RazorProjectPageRouteModelProvider will enumerate all the items under the razor pages root directory (N.B. this option can be configured at startup through the RazorPagesOptions class):

    
    services.Configure<RazorPagesOptions>(
        options => options.RootDirectory = "/CustomFolder");
    
    

    By default, Razor Pages will be found under the "/Pages" subdirectory of your application. The above snippet shows how you can customise this. The above snippet can also be achieved by using the WithRazorPagesRoot IMvcBuilder extension method.

    If you supply just a "/", this is the same as saying that the Razor Pages should be rooted at the content root. This can also be achieved through the WithRazorPagesAtContentRoot IMvcBuilder extension method.

    The RazorProjectPageRouteModelProvider will select files within the root directory that match the correct predicates. Files starting with "_" will be ignored, e.g., _Layout.cshtml. It will also ignore any .cshtml files that do not have the @page directive at the top of the file.

    If a file found in the root directory passes all of these inspections, then a PageRouteModel is created for that page and added to the current PageRouteModelProviderContext's RouteModels property. The PageRouteModel is essentially metadata that describes the routing for a razor page.

    The Compiled PageRouteModel Provider

    The CompiledPageRouteModelProvider will fetch any compiled PageRouteModel classes, and will also cache the page route models it has. This provider has a lower Order property than the RazorProjectPageRouteModelProvider, which means the PageActionDescriptorProvider will pass it the current context first.

    The RazorProjectPageRouteModelProvider will not add a PageRouteModel to the context if one already exists in the current context. With this in mind, you could add a brand new file to the root directory of the razor pages area, and it should be dynamically found by the provider.

    Razor Pages Route Template

    For each of your razor pages you can define a route template as part of the @page directive:

    
    @page "{handler?}"
    
    

    Handlers are discussed at length in the Microsoft documentation, and it is beyond the scope of this article to discuss them. However, the route template provided above allows specific route parameters, such as the handler parameter, into the URL path.

    These route templates are added to the PageActionDescriptor as part of the attribute route information, just like the traditional attribute routing in classic MVC.

    Route templates are just another example of how the ASP.NET team have based Razor Pages on top of the existing ecosystem.

    Summary

    Throughout this article, I have discussed how ASP.NET Core 2.0 discovers actions within your application. I have discussed how controller actions are discovered, as well as how the new razor pages are discovered.

    I have also pointed out multiple times that razor pages is just an extension of how actions currently work within the MVC world. With this in mind, you can use both MVC and Razor Pages at the same time, and both will be placed within the same routing model.

    Thanks for reading, and I hope this helps you understand more about ASP.NET Core.


© 2017 - Jack Histon - Blog