Rob E’s Mini-MVVM Framework @ MIX10

[20/03/10]

Rob Eisenberg’s MIX 2010 talk, “Build Your Own MVVM Framework” was terrific. If you feel modestly comfortable with MVVM, run over and get the video and the code.

Rob may be best known for his Caliburn WPF/Silverlight Presentation Development Framework. That’s a hefty body of work and if the word “framework” sends an unpleasant shiver down your spine … relax yourself.

The “framework” demonstrated at MIX is roughly 500 lines (says Rob … I haven’t checked yet <grin/>). It’s based on Caliburn but stripped to essentials that Rob covered in an easy-to-follow, leisurely, one hour code-walk.

Highlights:

  • Simple MVVM
    • "VM first" in the sense that VM is in the driver's seat.
    • No impediment to "View first" in the sense of view-design drives VM-design.
  • Simple naming conventions eliminate tedious code and XAML
  • Configuration at-the-ready when conventions fail
  • No code-behind … and didn’t miss it
  • No behaviors … and didn’t miss them (not that they’d be bad)
  • No XAML data binding; debuggable bindings created at runtime
  • No drag-and-drop binding … and didn’t miss it
  • No ICommand implementations and no event handlers
  • No files over 150 lines (as I remember)
  • Cool co-routines for programming a sequence of sync and async tasks; no call backs in the ViewModel
  • Screen Conductor pattern in play

All that in one hour.

The “co-routine” trick alone is worth your time. You almost get F# “bang” syntax in C#.
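The idea behind the co-routine trick can be sketched in a few lines of C#. This is a minimal reconstruction of the pattern, not Rob's actual code: the `IResult` interface, the `Step` class, and the `Coroutine` driver below are all my assumptions, chosen to show how `yield return` lets a ViewModel sequence (possibly asynchronous) tasks without callbacks.

```csharp
using System;
using System.Collections.Generic;

// Hypothetical names: IResult, Step, and Coroutine are illustrations of the
// pattern, not Caliburn's actual API.
public interface IResult
{
    // The driver runs the step; the step calls 'completed' when done
    // (immediately here, but it could be at the end of an async call).
    void Execute(Action completed);
}

public class Step : IResult
{
    private readonly string _name;
    private readonly List<string> _log;
    public Step(string name, List<string> log) { _name = name; _log = log; }
    public void Execute(Action completed) { _log.Add(_name); completed(); }
}

public static class Coroutine
{
    // Advance the iterator one IResult at a time; the next step only runs
    // after the previous one calls back. This is what lets the ViewModel
    // read like straight-line code with no explicit callbacks.
    public static void Execute(IEnumerator<IResult> steps)
    {
        if (!steps.MoveNext()) return;
        steps.Current.Execute(() => Execute(steps));
    }
}

public class MyViewModel
{
    public List<string> Log = new List<string>();

    // Reads top-to-bottom like synchronous code, close to F#'s "bang" feel.
    public IEnumerable<IResult> Save()
    {
        yield return new Step("show busy indicator", Log);
        yield return new Step("save to server", Log);
        yield return new Step("hide busy indicator", Log);
    }
}
```

Because the driver only advances when a step signals completion, a step backed by a real async operation simply holds on to `completed` and invokes it later.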

It could get more complicated in your app … and you’d have Caliburn. But it might not  … and you’d be living large with dead-simple, DRY code.

One of the best sessions ever.

If only we could teach Rob to emote. The guy is passionate on the subject but you might miss it behind that monotone voice of his. A Jim Carrey he is not. You want entertainment? Look elsewhere. You want substance … tune in.

He got a big ovation, by-the-by, so it ain’t just me who liked it.


A simple WCF service with username password authentication: the things they don’t tell you

[22/03/10]

The WCF framework is gigantic. It offers so many possibilities that it is easy to get completely lost. For our scenario we needed only a small, basic subset: our application provides a set of services consumed by a diversity of clients, which have to identify themselves with a custom username and password. There are loads and loads of documents and manuals to be found on the web, but I didn't find anything that gave me the complete story. Most of them take all kinds of sidesteps into other parts of the rich WCF framework, things we didn't need at all, while the things we did need were either omitted or only mentioned briefly in a comment.

This post tries to describe the full story, while keeping quiet about all the other cool, but unneeded, features. The information is assembled from a rich variety of material on the web and from the error messages provided by the WCF framework itself; the latter are often quite to the point and provide a lot of essential information. This post is just a kind of cookbook recipe. I don't claim to understand every detail, and would appreciate any comments that further clarify them.

The service

The service is an ASP.NET service, hosted by IIS and configured in the system.serviceModel part of the web.config.

  <system.serviceModel>
    <services>
      <service behaviorConfiguration="FarmService.CustomerDeskOperationsBehavior"
               name="FarmService.CustomerDeskOperations">
        <endpoint address="" binding="wsHttpBinding"
                  bindingConfiguration="RequestUserName"
                  contract="FarmService.ICustomerDeskOperations" />
        <endpoint address="mex" binding="mexHttpBinding" contract="IMetadataExchange" />
      </service>
    </services>

The endpoint address is the root of the IIS site in which it is hosted. To use username authentication you need the wsHttpBinding. The service's functionality is described in the ICustomerDeskOperations contract.
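For completeness, a service contract referenced this way is just an attributed .NET interface. The sketch below is an assumption: only the name FarmService.ICustomerDeskOperations comes from the configuration above; the operation shown is hypothetical.

```csharp
using System.ServiceModel;

namespace FarmService
{
    // The contract name must match the endpoint's contract attribute
    // in web.config. The operation below is an invented example.
    [ServiceContract]
    public interface ICustomerDeskOperations
    {
        [OperationContract]
        string GetCustomerStatus(int customerId);
    }
}
```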

In the binding you specify the credential type as username.

<bindings>
  <wsHttpBinding>
    <binding name="RequestUserName">
      <security mode="Message">
        <message clientCredentialType="UserName" />
      </security>
    </binding>
  </wsHttpBinding>
</bindings>

In the service behavior you set up how the username is going to be validated.

<behaviors>
  <serviceBehaviors>
    <behavior name="FarmService.CustomerDeskOperationsBehavior">
      <serviceMetadata httpGetEnabled="true" />
      <serviceCredentials>
        <userNameAuthentication userNamePasswordValidationMode="Custom"
            customUserNamePasswordValidatorType="FarmService.Authentication.DistributorValidator, FarmService" />
        <serviceCertificate findValue="Farm" storeLocation="LocalMachine"
            storeName="TrustedPeople" x509FindType="FindBySubjectName" />
      </serviceCredentials>
    </behavior>
  </serviceBehaviors>
</behaviors>

The username is custom validated. This is done by the FarmService.Authentication.DistributorValidator class in the FarmService assembly. This class inherits from the WCF class UserNamePasswordValidator and overrides its Validate method.

public class DistributorValidator : UserNamePasswordValidator
{
    public override void Validate(string userName, string password)
    {
        if (string.IsNullOrEmpty(userName) || string.IsNullOrEmpty(password))
            throw new SecurityTokenException("Username and password required");

        var repository = new DistributorRepository();
        if (!repository.IsKnownDistributor(userName, password))
            throw new FaultException(string.Format("Wrong username ({0}) or password ", userName));
    }
}

The method validates the incoming username and password against a repository and throws appropriate exceptions when needed. This is truly custom code: as long as you don't throw an exception, the service invocation is accepted.

So far this could have been a copy of many a story on the web, except for one detail which is absolutely essential: for username password authentication to work, the server hosting the service needs an X509 certificate. Otherwise all service invocations will fail. This certificate is specified in the service behavior.

<serviceCertificate findValue="Farm" storeLocation="LocalMachine" storeName="TrustedPeople"  x509FindType="FindBySubjectName"/>

First you need a certificate. Instead of buying one (which is bound to a specific server address and thereby as good as useless for testing purposes) you can create your own. The .NET Framework comes with tools to generate these, and there are several tutorials on how to use them. Far easier is SelfCert, a Pluralsight tool which takes care of the whole process in a couple of clicks.

What they don't tell you here is that you have to run the tool as administrator; otherwise it will crash most ungracefully. The tool is also unclear about where to store the generated certificate. By default it is stored in the My store. When the certificate is validated, its trustworthiness depends on the store it is in; when that store is not trusted, a chain of validation is started. Instead of setting up a chain of certificates, you can also store your certificate directly in a trusted store.

[Screenshot: the SelfCert dialog]

With these settings the certificate is stored in a trusted location. The name and location match the settings in the service behavior.

Troubles don't end here. After a while, such as after the next login, the service host will start complaining that it cannot find the private key of the certificate, with a "Keyset does not exist" error message. What happens is that the service no longer has the access rights to read the certificate. What helped me was explicitly setting rights on the certificate's private key file.

[Screenshot: setting rights on the certificate's private key file]

Here I am using a blunt axe by just giving everybody read rights on the certificate's private key file. I'm no security expert, but I am aware this is absolutely not the way to do things. But hey, I only wanted to build a service, never asked for this certificate stuff, and the only thing I want here is to get it out of the way during development.

Now the service is ready to be consumed by a client.

The client

To consume this service, add a service reference in the client. The mexHttpBinding in the service configuration makes it possible to read all metadata from the service without any credentials.

Setting up a connection to the service requires some fiddling. Again, not all of these settings are obvious.

var endPoint = new EndpointAddress(new Uri(Farm.FarmUrl),
    EndpointIdentity.CreateDnsIdentity("Farm"));

var binding = new WSHttpBinding();
binding.Security.Mode = SecurityMode.Message;
binding.Security.Message.ClientCredentialType = MessageCredentialType.UserName;

var result = new CustomerDeskOperationsClient(binding, endPoint);
result.ClientCredentials.UserName.UserName = Farm.FarmUserName;
result.ClientCredentials.UserName.Password = Farm.FarmPassword;

First we need an endpoint. This is assembled from the URL in the client's configuration, here the constant Farm.FarmUrl. For custom username authentication to work, the endpoint also needs an EndpointIdentity; according to the sparse MSDN documentation this is to prevent phishing. That the identity was needed, and that its parameter had to be the certificate's subject name, was suggested by the WCF error messages.

The security is set according to the security settings we saw in the service. Both the username and password are set in the UserName property of the ClientCredentials.

Wrapping up

This is it. Now our service and clients are talking. But it took far too much effort to find the right settings. Their number is not that large, but every one of them turned out to be essential, and finding the right ones was a process of endlessly weeding out sidesteps. I hope this will help you get it done a little faster.

Pow! Biff! Wham! Splat!

[22/03/10]

No, this post is not a tribute to the fabulously kitschy Batman TV series (1966-1968) starring Adam West and Burt Ward, nor to the onomatopoeic sounds for which it and the Batman comics were famous. The show did, however, come to mind when I was trying to solve a PowerShell problem and ran across the wonderfully-named splatting (@) operator introduced in PowerShell v2. Before we get to the splatting operator, let's look at the problem it was designed to solve.

With psake v2 came the change from a PowerShell script to a PowerShell module. Modules provide a lot of advantages over a simple script. For psake the compelling advantages were better control over scoping and better integration with PowerShell’s help system. One disadvantage was that you now had to first import the module before you could use psake.

[Screenshot: importing the psake module and invoking it at the PowerShell prompt]

ASIDE: If you’re wondering about the “James@EDDINGS psake [master +0 ~1 -0]>” stuff, I’ve installed Mark Embling’s awesome PowerShell Git Prompt, which is simply a custom PowerShell prompt. It tells me that my user is James, I’m logged into my main dev machine (EDDINGS), I’m in the psake directory (c:\dev\oss\psake) – though I only display the last part of the path for brevity, I’m on the “master” branch, I have no pending additions (+0), no pending changes (~0), and no pending deletions (-0). (I need to see if I can hack in how many commits forward or back I am from a tracked remote.) Everything in brackets is omitted if it isn’t a Git directory. Another good set of Git/PowerShell scripts is Jeremy Skinner’s PowerShell Git Tab Expansion for completing common command names, branch names, and remote names. If you are using Git and PowerShell, I would highly recommend both Mark’s and Jeremy’s scripts. If you don’t want to copy/paste them together, you can grab them from my random collection of PowerShell scripts here.

Note how we had to first call "import-module" before we could use psake. Some people install the latest version of psake in a well-known location, import the module, and then run it from there until the next update comes out. Others (e.g. me) like to version psake along with their source code and other dependencies. Importing a project-specific copy of psake becomes a headache very quickly. So I wrote a little shim script to register psake, run it, and then unregister it.

 

# Helper script for those who want to run
# psake without importing the module.
import-module .\psake.psm1
invoke-psake $args
remove-module psake

 

Seems reasonable enough. We simply pass along the script arguments ($args) to the invoke-psake command and everything should be fine.

[Screenshot: invoke-psake receives the whole argument array as its first parameter]

OK. What happened? PowerShell did what we told it to: it called the function, invoke-psake, with an array as its first parameter rather than using the array as the list of parameters as we intended. Let's fix that.

 

# Helper script for those who want to run
# psake without importing the module.
import-module .\psake.psm1
invoke-psake $args[0] $args[1]
remove-module psake

 

One little problem.

[Screenshot: invoke-psake receives a null argument for the task]

Note that we left out the task ("clean" previously) so that psake would use the default. Rather than using the default, invoke-psake has been passed a null argument for the task. We could fix this by detecting null arguments in invoke-psake and explicitly specifying the defaults. It's ugly, because we couldn't use PowerShell's syntax for specifying defaults, but it would work. Another problem is that we would need to add as many $args[n] entries as arguments we expected to receive. A messy solution all around.

Fortunately PowerShell v2 has an elegant solution to this problem called the splatting operator, denoted by @. The splatting operator binds an array to the argument list of a function.

 

# Helper script for those who want to run
# psake without importing the module.
import-module .\psake.psm1
invoke-psake @args
remove-module psake

 

Note the subtle change. Rather than using $args we use @args.

[Screenshot: invoke-psake now receives the arguments correctly]

Success! And it's not just for passing arguments from one script to another. You can create arrays yourself and splat them into your own functions.

[Screenshot: calling Add with $addends versus @addends]

Note the call to “Add $addends” where PowerShell called the Add function once for every item in the array. Not what we intended. “Add @addends” using the splatting operator gave us the expected result. You can even use a hashtable to splat named parameters.

[Screenshot: splatting named parameters with a hashtable]

Note that the answer was 1 (e.g. 11 % 10) and not 10 (e.g. 10 % 11). The splatting operator properly bound the value 11 to the x parameter and 10 to the y parameter, just as it was in the hashtable.
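The examples from the screenshots can be reconstructed as a short script. Only the names (Add, $addends) and the 11 % 10 result come from the post; the function bodies and the second function's name are my assumptions.

```powershell
# Positional splatting: the array elements bind to $x and $y in order.
function Add($x, $y) { $x + $y }
$addends = 2, 3
Add @addends          # 5

# Named splatting: hashtable keys bind to the matching parameter names,
# regardless of the order they appear in the hashtable.
function Modulus($x, $y) { $x % $y }
$parms = @{ y = 10; x = 11 }
Modulus @parms        # 1  (11 % 10)
```

With `$addends` the array would be bound to `$x` alone; the `@` sigil is what tells PowerShell to spread it across the parameter list.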

The splatting operator provides us with a tremendous amount of flexibility in manipulating function and script arguments. It’s a useful tool to add to your PowerShell arsenal. Go forth and SPLAT!

New Scrum Cartoon Coming Out This Week!

[22/03/10]

Hi,

Well I am still in India and I hear your feedback that you want more cartoons.

So… without further ado, this week you will see a brand new one.

It focuses on a really cool Bear, and how you can do estimation and planning.

That’s it for the teaser…. Coming soon!

- mike vizdos
www.michaelvizdos.com
www.implementingscrum.com
