The Reactive Extensions for JavaScript Released

[17/03/10]

The long awaited day has come: the Reactive Extensions for JavaScript have been released on DevLabs in conjunction with the talk given by Erik Meijer at MIX 2010.  Jeff Van Gogh, one of the principal developers on this project, has more details and a detailed look at the “Time Flies Like an Arrow” sample application.  I’d also like to give a detailed explanation of another sample application, the Bing Maps and Twitter mashup.


Mashing Bing Maps and Twitter

One of the demos I’ve created is a mashup of Bing Maps and Twitter: if a tweet has geolocation information attached, it is displayed on a map in near real time.  What that means is that at a specified interval, we query the Twitter search API for a term such as FourSquare, which has a higher probability of having geolocation information attached, and then display the results.  So, how could we do this?  Let’s walk through the example step by step.

First, we need to focus on the Bing Maps AJAX API, which gives us the two main features we’re looking for: displaying a map and putting pushpins on it with some detailed information.  In order to make use of it, we’ll need to reference the Bing Maps AJAX library as follows:

<script type="text/javascript" 
  src="http://ecn.dev.virtualearth.net/mapcontrol/mapcontrol.ashx?v=6.2">
</script>

Next, we’ll need a place to put our map, so, let’s create a simple <div> to host it.

<body>
    <div id="veMap" 
         style="position:relative; width: 1000px; height: 600px;"></div>
</body>

And then we’ll need our global map object that we can manipulate.

<script type="text/javascript">

    var map = null;
    
    // More code goes here

Once we’ve defined that, we need a way to show the pushpin on the page.  For this I’ll use the tweet ID as the overall pushpin ID, and I’ll take the date, latitude, longitude, the user’s icon URL, the user name and the text.  From there, I’ll create a pushpin in that exact spot with the associated data.  I wrap the call in a try/catch block because the Bing Maps API doesn’t allow multiple pushpins with the same ID; if I’ve already seen a tweet, I simply ignore the failure.

function addPushPin(
    id, 
    date, 
    latitude, 
    longitude, 
    imageUrl, 
    text, 
    details) {

    try {
        var pinPoint = new VELatLong(
            latitude,
            longitude, 
            0, 
            VEAltitudeMode.RelativeToGround);
        var detailText = date + "-" + details;
        var pin = new VEPushpin(id, pinPoint, imageUrl, text, detailText);
        map.AddPushpin(pin);
    } catch(err) {
        // Seen it, don't worry
    }
}

Moving on to the Twitter side of this, we’ll need a way to query Twitter using the Search API.  In this case, we’ll once again use the XmlHttpRequest method with a URL which contains our search text and requests 100 records per page.

function searchTwitter(text) {
    var url = 
        "http://search.twitter.com/search.json?rpp=100&q=" 
            + encodeURIComponent(text);

    return Rx.Observable.XmlHttpRequest(url);
}
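As a quick standalone check of the URL that searchTwitter builds, here is the same query-string construction on its own (the endpoint and rpp parameter are taken from the snippet above; the second search term is just a made-up example):

```javascript
// Builds the same Search API URL as searchTwitter above, without the
// Rx wrapper, so the encoding can be inspected on its own.
function buildSearchUrl(text) {
    return "http://search.twitter.com/search.json?rpp=100&q="
        + encodeURIComponent(text);
}

// encodeURIComponent escapes characters that would otherwise break the
// query string, such as '#' and spaces in hashtag searches.
buildSearchUrl("foursquare");
// "http://search.twitter.com/search.json?rpp=100&q=foursquare"
buildSearchUrl("#4sq nyc");
// "http://search.twitter.com/search.json?rpp=100&q=%234sq%20nyc"
```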

Once the page loads, we need to load the map.  So, in my document’s ready function, I initialize the VEMap with the ID of my <div>, load the map and set my zoom level to a nice globe shot.

$(document).ready(function() {

    map = new VEMap("veMap");
    map.LoadMap();
    map.SetZoomLevel(2);

Now we get to the interesting part.  How do we reload the data every so often without running into the Twitter API rate limit?  We can use the Interval method, which lets us specify a due time before the next value is produced.  In this case, our action is going to be searching for FourSquare, which gives us an IObservable&lt;IObservable&lt;JSONData&gt;&gt; in C# parlance.  We need to flatten that, keeping only the results of the most recent query, which is exactly what the Switch method does.

Rx.Observable
    .Interval(10000)
    .Select(function() { 
        return searchTwitter("foursquare"); })
    .Switch()

Since we’re dealing with JSON data, we need to parse it safely; in this case, I’m using the JSON2 library to do that.  We’ll then take the JSON array, split it apart and turn each record into an observable value with the SelectMany method.  And since we’re only interested in tweets with geolocation, I filter them with the Where method.

    .Select(function(result) {
        return JSON.parse(result.responseText); })
    .SelectMany(function(data) {
        return data.results.toObservable(); })
    .Where(function(data) {
        return data.geo != null; })

Finally, we can subscribe to the resulting observable.  Our subscription is going to add a pushpin with the JSON data, as well as handle any potential errors.

    .Subscribe(
        function(data) {
            var lat = data.geo.coordinates[0];
            var lon = data.geo.coordinates[1];

            addPushPin(
                data.id,
                data.created_at,
                lat,
                lon,
                data.profile_image_url,
                data.from_user,
                data.text);
        },
        function(error) {
            alert(error);
        });

The code in its entirety can be found here.  And below is the result of our code where we see the icons of the Twitter users who mention FourSquare after we leave it running for just a few short minutes…

[Screenshot: a world map dotted with the icons of Twitter users mentioning FourSquare]

We could take this example further to expire the pushpins over time, but I think overall, it’s a great example of asynchronous programming in JavaScript and how you can integrate it into your APIs.
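For instance, expiring pushpins could be sketched along these lines.  This is only a sketch under assumptions: it tracks when each pin was added and hands expired IDs to a delete callback, since the exact removal method (e.g. a DeletePushpin counterpart to AddPushpin) depends on the map control version.

```javascript
// Sketch: expire pushpins after a fixed time-to-live. The delete
// callback is supplied by the caller, e.g.
//     function (id) { map.DeletePushpin(id); }
// (whether DeletePushpin exists depends on the Bing Maps control version).
var PIN_TTL = 5 * 60 * 1000; // five minutes, in milliseconds
var pinTimestamps = {};      // pushpin id -> time the pin was added

function trackPushPin(id) {
    pinTimestamps[id] = Date.now();
}

function expirePushPins(now, deletePin) {
    // Remove every tracked pin older than PIN_TTL.
    for (var id in pinTimestamps) {
        if (now - pinTimestamps[id] > PIN_TTL) {
            deletePin(id);
            delete pinTimestamps[id];
        }
    }
}
```

Calling trackPushPin from addPushPin, and running expirePushPins on the same Interval that drives the Twitter query, would keep the map from filling up indefinitely.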

Conclusion

Through the use of the Reactive Extensions for JavaScript, we’re able to mash two APIs together through AJAX and refresh them at a regular interval, keeping a near real-time feel.  That’s just one of the many things we can do with the library, and I hope to cover more of them in the near future.  So, download it, and give the team feedback!

What can I say?  I love JavaScript and am very much looking forward to the upcoming JSConf 2010 here in Washington, DC, where the Reactive Extensions for JavaScript will be shown in their full glory by Jeffrey Van Gogh (who you can now follow on Twitter).  Too many times, we’ve looked for abstractions over the natural languages of the web (HTML, CSS and JavaScript) and created monstrosities instead of embracing the web for what it is.  Libraries such as jQuery, and indeed the Reactive Extensions for JavaScript, give us better tools for dealing with that troubled child that is DOM manipulation, and especially events.



A simple WCF service with username password authentication: the things they don’t tell you

[22/03/10]

The WCF framework is gigantic. It has such an enormous number of possibilities that it’s pretty easy to get completely lost. For our scenario we needed just a small, basic subset: our application provides a set of services which are consumed by a diversity of clients, which have to tell who they are by providing a custom username and password. There are loads and loads of documents and manuals to be found on the web, but I didn’t find anything which gave me the complete story. Most of them take all kinds of sidesteps into other parts of the rich WCF framework, things we didn’t need at all, while the things we did need to get our stuff working were either omitted or only mentioned briefly in a comment.

This post tries to describe the full story, and tries to keep quiet about all the other cool, but unneeded, features. The information is assembled from a rich variety of material on the web and from error messages provided by the WCF framework itself; the latter are often quite to the point and provide a lot of essential information. This post is just a kind of cookbook recipe; I don’t claim to understand every detail, and would appreciate any comment that further clarifies the details.

The service

The service is an ASP.NET service, hosted by IIS and configured in the system.serviceModel section of the web.config.

  <system.serviceModel>
    <services>
      <service behaviorConfiguration="FarmService.CustomerDeskOperationsBehavior"
               name="FarmService.CustomerDeskOperations">
        <endpoint address=""
                  binding="wsHttpBinding"
                  bindingConfiguration="RequestUserName"
                  contract="FarmService.ICustomerDeskOperations" />
        <endpoint address="mex" binding="mexHttpBinding" contract="IMetadataExchange" />
      </service>
    </services>

The endpoint address is the root of the IIS site in which it is hosted. To use username authentication you need to use wsHttpBinding. The service’s functionality is described in the ICustomerDeskOperations contract.

In the binding you specify the credential type as username.

<bindings>
  <wsHttpBinding>
    <binding name="RequestUserName">
      <security mode="Message">
        <message clientCredentialType="UserName" />
      </security>
    </binding>

In the service behavior you set up how the username is going to be validated.

<behaviors>
  <serviceBehaviors>
    <behavior name="FarmService.CustomerDeskOperationsBehavior">
      <serviceMetadata httpGetEnabled="true" />
      <serviceCredentials>
        <userNameAuthentication userNamePasswordValidationMode="Custom"
            customUserNamePasswordValidatorType="FarmService.Authentication.DistributorValidator, FarmService" />
        <serviceCertificate findValue="Farm" storeLocation="LocalMachine"
            storeName="TrustedPeople" x509FindType="FindBySubjectName" />
      </serviceCredentials>
    </behavior>

The username is custom validated. This is done by the FarmService.Authentication.DistributorValidator class in the FarmService assembly. This class inherits from the WCF class UserNamePasswordValidator and overrides the Validate method.

public class DistributorValidator : UserNamePasswordValidator
{
    public override void Validate(string userName, string password)
    {
        if (string.IsNullOrEmpty(userName) || string.IsNullOrEmpty(password))
            throw new SecurityTokenException("Username and password required");

        var repository = new DistributorRepository();
        if (!repository.IsKnownDistributor(userName, password))
            throw new FaultException(string.Format("Wrong username ({0}) or password", userName));
    }
}

The method validates the incoming username and password against a repository and throws appropriate exceptions when needed. This is truly custom code: as long as you don’t throw an exception, the service invocation will be accepted.

So far this could have been a copy of many a story on the web, except for one detail which is absolutely essential: for username password authentication to work, the server hosting the service needs an X509 certificate, otherwise all service invocations will fail. This certificate is specified in the service behavior.

<serviceCertificate findValue="Farm" storeLocation="LocalMachine" storeName="TrustedPeople" x509FindType="FindBySubjectName" />

First you need a certificate. Instead of buying one (which is bound to a specific server address and thereby as good as useless for testing purposes) you can create your own. The .NET Framework comes with tools to generate these, and there are several tutorials on how to use them. Far easier is SelfCert, a Pluralsight tool which takes care of the whole process in a couple of clicks.

What they don’t tell you here is that you have to run the tool as administrator, or else it will crash most ungracefully. The tool is also unclear about where to store the generated certificate; by default it is stored in the My store. When validating the certificate, its trustworthiness depends on the location where it is stored. When the store is not trusted, a chain of validation is started. Instead of setting up a chain of certificates, you can also directly store your certificate in a trusted store.

[Screenshot: SelfCert settings]

With these settings the certificate is stored in a trusted location. The name and location match the settings in the service behavior.

Troubles don’t end here. After a while, such as at the next login, the service host will start complaining that it cannot find the private key of the certificate, with a “Keyset does not exist” error message. What happens is that the service no longer has the access rights to read the certificate. What helped me was explicitly setting rights on the certificate’s private key file.
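For reference, the same fix can also be applied from an (administrator) command prompt instead of through the Explorer permissions dialog. This is a sketch under assumptions: FindPrivateKey is the sample tool shipped with the WCF SDK samples, the key file path is printed by that tool, and the account to grant rights to depends on which identity your application pool runs as.

```bat
REM Locate the private key file for the certificate (FindPrivateKey is a
REM WCF SDK sample tool; store name and subject match the config above).
FindPrivateKey TrustedPeople LocalMachine -n "CN=Farm"

REM Grant read access on the file it reports ("Everyone" mirrors the blunt
REM approach below; prefer your actual app pool identity).
cacls "<key file path reported above>" /E /G Everyone:R
```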

[Screenshot: granting read access on the certificate’s private key file]

Here I am using a blunt axe by just giving everybody read rights on the certificate’s private key file. I’m no security expert, and I am aware this is absolutely not the way to do things. But hey, I only want to build a service; I never asked for this certificate stuff, and the only thing I want to do here is get it out of the way during development.

Now the service is ready to be consumed by a client.

The client

To consume this service, add a service reference in the client. The mexHttpBinding in the service configuration makes it possible to read all metadata from the service without any credentials.

Setting up a connection to the service requires some fiddling; again, not all of these settings are clear by default.

var endPoint = new EndpointAddress(
    new Uri(Farm.FarmUrl),
    EndpointIdentity.CreateDnsIdentity("Farm"));

var binding = new WSHttpBinding();
binding.Security.Mode = SecurityMode.Message;
binding.Security.Message.ClientCredentialType = MessageCredentialType.UserName;

var result = new CustomerDeskOperationsClient(binding, endPoint);
result.ClientCredentials.UserName.UserName = Farm.FarmUserName;
result.ClientCredentials.UserName.Password = Farm.FarmPassword;

First we need an endpoint. This is assembled from the URL in the client’s configuration, here a constant Farm.FarmUrl. For the custom username authentication to work, the endpoint also needs an EndpointIdentity; according to the sparse MSDN documentation this is to prevent phishing. The fact that the identity was needed, and that its parameter had to be the certificate’s name, was suggested by the WCF error messages.

The security is set according to the security settings we have seen in the service. Both the username and password are set in the UserName property of the ClientCredentials.

Wrapping up

This is it; now our service and clients are talking. But it took far too much effort to find the right settings. Their number is not great, but all of them turned out to be essential, and finding the right ones was a process of endlessly weeding out sidesteps. I hope this will help you get it done a little faster.

Pow! Biff! Wham! Splat!

[22/03/10]

No, this post is not a tribute to the fabulously kitschy Batman TV series (1966-1968) starring Adam West and Burt Ward, or a tribute to the onomatopoeic sounds for which it and the Batman comics were famous. The show did however come to mind when I was trying to solve a PowerShell problem and ran across the wonderfully-named splatting (@) operator introduced in PowerShell v2. Before we get to the splatting operator, let’s look at the problem that it was designed to solve.

With psake v2 came the change from a PowerShell script to a PowerShell module. Modules provide a lot of advantages over a simple script. For psake the compelling advantages were better control over scoping and better integration with PowerShell’s help system. One disadvantage was that you now had to first import the module before you could use psake.

[Screenshot: calling import-module psake before running invoke-psake]

ASIDE: If you’re wondering about the “James@EDDINGS psake [master +0 ~1 -0]” stuff, I’ve installed Mark Embling’s awesome PowerShell Git Prompt, which is simply a custom PowerShell prompt. It tells me that my user is James, I’m logged into my main dev machine (EDDINGS), I’m in the psake directory (c:\dev\oss\psake) – though I only display the last part of the path for brevity – I’m on the “master” branch, and I have no pending additions (+0), one pending change (~1), and no pending deletions (-0). (I need to see if I can hack in how many commits forward or back I am from a tracked remote.) Everything in brackets is omitted if it isn’t a Git directory. Another good set of Git/PowerShell scripts is Jeremy Skinner’s PowerShell Git Tab Expansion for completing common command names, branch names, and remote names. If you are using Git and PowerShell, I would highly recommend both Mark’s and Jeremy’s scripts. If you don’t want to copy/paste them together, you can grab them from my random collection of PowerShell scripts here.

Note how we had to call “import-module” before we could use psake. Some people install the latest version of psake in a well-known location, import the module, and then run it from there until the next update comes out. Others (e.g. me) like to version psake along with their source code and other dependencies, and importing a project-specific copy of psake becomes a headache very quickly. So I wrote a little shim script to register psake, run it, and then unregister it.

 

# Helper script for those who want to run
# psake without importing the module.
import-module .\psake.psm1
invoke-psake $args
remove-module psake

 

Seems reasonable enough. We simply pass along the script arguments ($args) to the invoke-psake command and everything should be fine.

[Screenshot: invoke-psake receiving the $args array as a single parameter]

OK. What happened? PowerShell did what we told it to. It called the function, invoke-psake, with an array as its first parameter rather than using the array as the list of parameters as we intended. Let’s fix that.

 

# Helper script for those who want to run
# psake without importing the module.
import-module .\psake.psm1
invoke-psake $args[0] $args[1]
remove-module psake

 

One little problem.

[Screenshot: invoke-psake called with a null task argument instead of the default]

Note that we left out the task (“clean” previously) so that psake would use the default. Rather than using the default, invoke-psake has been passed a null argument for the task. We could fix this by detecting null arguments in invoke-psake and explicitly specifying the defaults. It’s ugly, because we couldn’t use PowerShell’s syntax for specifying defaults, but it would work. Another problem is that we would need as many $args[N] entries as we expected to receive arguments. A messy solution all around.

Fortunately PowerShell v2 has an elegant solution to this problem called the splatting operator, denoted by @. The splatting operator binds an array to the argument list of a function.

 

# Helper script for those who want to run
# psake without importing the module.
import-module .\psake.psm1
invoke-psake @args
remove-module psake

 

Note the subtle change. Rather than using $args we use @args.

[Screenshot: invoke-psake running correctly with splatted arguments]

Success! And it’s not just for passing arguments from one script to another. You can create arrays and hashtables yourself and splat them into any function.

[Screenshot: calling Add with $addends versus @addends]

Note the call to “Add $addends”, where PowerShell bound the entire array to the Add function’s first parameter. Not what we intended. “Add @addends”, using the splatting operator, gave us the expected result. You can even use a hashtable to splat named parameters.
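Since the screenshot doesn’t reproduce well here, the array case can be sketched like this (the Add function body and the $addends values are assumptions, not the original demo code):

```powershell
# A hypothetical two-parameter function for demonstrating splatting.
function Add($x, $y) { $x + $y }

$addends = 1, 2

Add $addends   # the whole array binds to $x and $y is never set -- not intended
Add @addends   # splatted: $x = 1 and $y = 2, so the result is 3
```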

[Screenshot: splatting named parameters with a hashtable]

Note that the answer was 1 (i.e. 11 % 10) and not 10 (i.e. 10 % 11). The splatting operator properly bound the value 11 to the x parameter and 10 to the y parameter, just as they appeared in the hashtable.
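The hashtable case can be approximated like this (the Modulus function name and the hashtable contents are assumptions reconstructed from the description above):

```powershell
# Hypothetical function matching the description: the answer was 11 % 10 = 1.
function Modulus($x, $y) { $x % $y }

# Key order in the hashtable doesn't matter: splatting binds by name.
$parms = @{ y = 10; x = 11 }

Modulus @parms   # $x = 11 and $y = 10, so the result is 1
```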

The splatting operator provides us with a tremendous amount of flexibility in manipulating function and script arguments. It’s a useful tool to add to your PowerShell arsenal. Go forth and SPLAT!

New Scrum Cartoon Coming Out This Week!

[22/03/10]

Hi,

Well I am still in India and I hear your feedback that you want more cartoons.

So… without further ado, this week you will see a brand new one.

It focuses on a really cool Bear, and how you can do estimation and planning.

That’s it for the teaser…. Coming soon!

- mike vizdos
www.michaelvizdos.com
www.implementingscrum.com

[21/03/10]  - Introduction to the Reactive Extensions for JavaScript – jQuery Live Event Integration

[21/03/10]  - On-Site Certified ScrumMaster Course on eBay

[20/03/10]  - IIS 7 URL Rewriter for SEO Friendly URL’s

[20/03/10]  - Rob E’s Mini-MVVM Framework @ MIX10

[20/03/10]  - TDD: Expressive test names

[19/03/10]  - MVVM, Josh Smith’s Way

[19/03/10]  - DevTeach Toronto 2010 Wrap-Up

[19/03/10]  - ASP.NET Performance Framework

[19/03/10]  - Introduction to the Reactive Extensions for JavaScript – Drag and Drop

[18/03/10]  - Essential and accidental complexity

[18/03/10]  - Organizational Barriers and Impediments to Big Scrum Implementations

[18/03/10]  - Architecture and Design Evolution

[18/03/10]  - Positive Psychology and Team Performance

[18/03/10]  - Orlando Scrum Gathering blog/link below

[18/03/10]  - Your chance to heckle me on Ignite your Coding tomorrow

[17/03/10]  - The Reactive Extensions for JavaScript Released

[17/03/10]  - What is Scrum?