Castle Project code organization
[26/02/10]
This morning I stumbled on a blog post from Krzysztof Kozmic, .NET OSS Dependency Hell. The title caught my attention immediately: the topic of dependencies in .NET is dear to me. Krzysztof explains a problem that users of the Castle Project have (hence a problem the Castle Project developers have to fix).
The Castle Project is the union of several OSS projects (ActiveRecord - MonoRail - MicroKernel/Windsor - Common Components - DynamicProxy ...). On the Castle Mission page I can read:
Castle should not be all-or-nothing. The developer can use the tools he wants to use, and at the same time, use different approaches in different areas of his application in a peaceful manner.
And here is the complaint formulated by a Castle user:
I just cannot upgrade. I want to use ASP.NET MVC version 2.0 but my upgrade path is just too complicated. I have used too much OSS.
My understanding of the problem is that integrating with Castle generates maintenance friction. I have to say that I have never used the Castle Project directly. I am going to add my two cents to the debate just by extracting facts from the Castle Project code base itself.
So I downloaded the Castle assemblies and analyzed them with NDepend. There are 23 Castle assemblies made of 212,853 IL instructions (around 33K logical Lines of Code). My first impression is that while 33K LoC reveals a huge OSS development and maintenance effort, 33K LoC can easily be compiled into a single assembly. My team has a 45K LoC assembly that compiles in 5 seconds and weighs 1.5MB (it would be much lighter without all the embedded resources, btw). Here is the graph of dependencies between Castle assemblies:
While the graph is pretty informative, I tend to prefer browsing the Dependency Matrix to understand the structure of a code base. In the graph I removed the dependencies on the tier (third-party) assemblies used by Castle, because it became unreadable. But the Dependency Matrix scales easily, and in this particular case, I estimate that seeing how the Castle assemblies use tier assemblies is essential.
Tier assembly usage in the Castle Project matters because it shows that under the 33K LoC of the Castle Project lives a lot more OSS code (NHibernate, the Boo infrastructure, log4net, NVelocity… and also the Lucene and Antlr runtimes, not visible here because they are not used directly by Castle). I did an experiment: aggregating all the underlying OSS assemblies (so that only .NET Fx assemblies remain in the blue tier assemblies column), I obtained a much larger code base made of 1,141,754 IL instructions (about 175K LoC).
My two-cents proposal is that the Castle Project code base should be re-partitioned with two goals in mind.
- First, having a minimal number of assemblies. Here I mean that each assembly in the new partition should have a relevant physical reason to exist. I wrote about this in Advice on Partitioning Code through .NET Assemblies (separating two tools is a logical reason; avoiding accidentally loading too much unnecessary code is a physical reason).
- Second, due to the dependencies on large tier OSS projects outside of Castle (like NHibernate or Boo), the partition should be done in such a way that a large tier OSS project cannot get loaded accidentally at runtime if the Castle feature requested by the user doesn't need it. For example, Boo is used exclusively by Castle.MonoRail. Thus, here we have a need to separate Castle.MonoRail, because we don't want to accidentally load the 1.5MB Boo assemblies if they are not needed.
With this approach I don't know how many assemblies would be needed, maybe 4 or 5. Certainly, such an optimal partition would be incoherent with the Castle Project's initial approach of proposing relatively independent tools (Castle should not be all-or-nothing). Maybe some tools would span several assemblies, and a single assembly might contain several tools. But from the Castle user's point of view, with the set of Castle assemblies reduced to something like 4 or 5, the friction of maintaining numerous assembly references is gone.
As a side note, a public or internal Castle.Core.dll assembly referenced by all Castle assemblies may or may not be needed; a deeper analysis would be required for that. Also, a single CastleProject.dll assembly would be great, but the chance of accidentally loading a lot of unnecessary code at runtime (because of JIT compilation) would then increase a lot.
From the Castle developers point of view, the impact of such change would be huge.
- First, they would need to synchronize all tools for each new release. I don't know how each tool's upgrades are currently released, but synchronizing all tool upgrades would simplify updating for users.
- Second, they would need to take care of the dependencies internal to the Castle assemblies themselves. But even more essential, they would need to avoid the development friction generated by living in the same compilation unit. An idea would be to have many smaller compilation units used for development and unit testing, with all the source code getting integrated into a few larger compilation units for release. And I don't mean using ILMerge here; things must be aggregated properly.
This was my two cents, with my external view of the Castle Project. It is not an all-or-nothing proposition, and what is essential is to expose the Castle API through fewer, larger assemblies. Certainly an insider might yell at this proposition, arguing "some internal Castle Project constraints here", but the point is to make the life of Castle Project users easier by reducing the maintenance friction of referencing Castle.
Web Forms Routing in ASP.NET 4
[28/02/10]
At our first Sarasota Web Developer Group meeting we discussed several of the new enhancements in ASP.NET 4 Web Forms. One of my favorite enhancements is the new routing features which are very similar to the ones I have enjoyed so much in ASP.NET MVC.
Register Routes
This is old hat for those using ASP.NET MVC. Just register your routes at application startup. Rather than your endpoint being a controller, however, you associate a physical page as the handler of the request.
public class Global : System.Web.HttpApplication
{
    void Application_Start(object sender, EventArgs e)
    {
        RegisterRoutes(RouteTable.Routes);
    }

    void RegisterRoutes(RouteCollection routes)
    {
        routes.MapPageRoute(
            "Contact_Details",        // Route name
            "Contacts/Details/{id}",  // Url and parameters
            "~/Contacts/Details.aspx" // Page handling the request
        );
    }
}
In this case we are telling the routing engine 3 things:
- Name of the Route: Contact_Details
- The Route: Contacts/Details/{id}
- The Physical Page Handling the Request: Details.aspx
Notice the id parameter (route value), which will be the id of the contact to display in the details page.
Expression Builders for Creating HyperLinks, etc.
With ASP.NET MVC we have strongly-typed View Helpers to help generate links. With ASP.NET 4 Web Forms you utilize Expression Builders to create links such as:
<asp:HyperLink NavigateUrl="<%$RouteUrl:RouteName=Contact_Details, id=1 %>" runat="server">John Doe</asp:HyperLink>
<asp:HyperLink NavigateUrl="<%$RouteUrl:id=1 %>" runat="server">John Doe</asp:HyperLink>
We can explicitly specify the name of the route or let the routing engine figure out the correct route based on the parameters.
Getting RouteData from the Page
You can access the RouteData from the Page by accessing the Page.RouteData Property, which is just a convenient access point to RequestContext.RouteData. Here are a couple of ways to get the id from the route to display the proper contact given its id:
public partial class Details : System.Web.UI.Page
{
    protected void Page_Load(object sender, EventArgs e)
    {
        var id = Page.RouteData.GetRequiredString("id");
        var id2 = Page.RouteData.Values["id"];
    }
}
RouteParameter for use with DataSources
If you are displaying the contact in a DetailsView, for example, you can use the new RouteParameter with your DataSource to get values from the route as such:
<asp:ObjectDataSource ID="ObjectDataSource1" runat="server"
    SelectMethod="FindById" TypeName="Contact">
    <SelectParameters>
        <asp:RouteParameter Name="id" RouteKey="id" Type="Int32" />
    </SelectParameters>
</asp:ObjectDataSource>
Binding your DetailsView to the ObjectDataSource will now cause the contact to be displayed appropriately.
Response.RedirectToRoute and RedirectToRoutePermanent
A lot has been mentioned about using Response.RedirectPermanent for SEO, but even cooler is Response.RedirectToRoute and Response.RedirectToRoutePermanent for working with the new routing engine. Below I am specifying the route name and passing in any route values where necessary when redirecting:
Response.RedirectToRoute("Contact_Details", new { id = 1 });
Response.RedirectToRoutePermanent("Contact_Details", new { id = 1 });
Conclusion
Lots of really neat things in ASP.NET 4 and ASP.NET 4 Web Forms. I am going to continue to post a few more we discussed during the first meeting.
For those interested, the second meeting of the Sarasota Web Developer Group will discuss a number of interesting topics: Leveraging ASP.NET MVC - Web Forms - DynamicData - Castle ActiveRecord.
Introduction to the Reactive Extensions for JavaScript – Creating Observers
[27/02/10]
Looking back at the previous post, we covered how to create observable sequences, the producers of our data. We have quite a number of ways of creating these beyond the events we covered earlier. Now that we have these observable sequences, what next? We need to address the consumer side of this producer/consumer story in the form of an observer.
Before we get started, let's get caught up on where we are today.
Creating Observers
Let’s go back to the Observer pattern definition once again before we get started. The idea here is that we have an object, called the Observable (or Subject) which keeps a list of its dependents, the observers, and notifies each of them automatically of any state changes. In the case of the Reactive Extensions for JavaScript, we’re talking more about observable sequences. As we discussed last time, the Observer has three parts:
- OnNext – when a new value is produced
- OnError – when an exception occurs
- OnCompleted – when the observable sequence terminates
When creating an observer, we should take all three into account and how we’re going to handle them.
In order to attach these observers to our observable sequence, we can invoke the Subscribe method on our observable while passing in our observer. And when we’re no longer interested in the subscription to the observable sequence, we can detach by calling Dispose on the result of the Subscribe method.
New Observer via Create
Let’s get started in creating an Observer by looking at the Observer.Create method. This method takes in three functions, one for the OnNext, one for the OnError and finally one for the OnCompleted. This function returns to us an Observer which we can then use for subscribing.
Rx.Observer.Create(
    function(next) { ... }, // OnNext
    function(err) { ... },  // OnError
    function() { ... }      // OnCompleted
);
Once we have an Observer, we can then attach to the Observable using the Subscribe method which takes our Observer. When we call Subscribe, we get back a disposable object with a single Dispose method which allows us to detach from the Observable.
Observable {
    Subscribe : function(observer) { ... }
}
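To make that contract concrete, here is a minimal, self-contained sketch of the Subscribe/Dispose handshake. This is my own illustrative implementation, not the actual Rx source; the createObserver and fromArray names are hypothetical stand-ins for the real API.

```javascript
// Illustrative sketch of the observer/observable contract -- not Rx itself.
// An observer is simply an object with the three handler functions.
function createObserver(onNext, onError, onCompleted) {
    return { OnNext: onNext, OnError: onError, OnCompleted: onCompleted };
}

// A toy observable over an array: pushes each value, then signals completion.
function fromArray(values) {
    return {
        Subscribe: function(observer) {
            var disposed = false;
            values.forEach(function(v) {
                if (!disposed) observer.OnNext(v);
            });
            if (!disposed) observer.OnCompleted();
            // Subscribe returns a disposable used to detach the observer.
            return { Dispose: function() { disposed = true; } };
        }
    };
}

var seen = [];
var observer = createObserver(
    function(next) { seen.push(next); },  // OnNext
    function(err) { },                    // OnError
    function() { seen.push("done"); }     // OnCompleted
);
fromArray([1, 2, 3]).Subscribe(observer);
// seen is now [1, 2, 3, "done"]
```

The key point is simply that Subscribe wires an observer to a source and hands back the object you later Dispose.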
One of the best ways I find to explore a new API is to write tests that show the expected behavior. By writing these, I get a comprehensive view of what each method does, especially if the code didn't come with tests already. So, let's create a few tests to show the behavior of creating an Observer and then subscribing to an observable sequence. I'll use QUnit to write my tests, and in particular its asynchronous test feature, because we are testing asynchronous callbacks.
The first test checks the OnNext function parameter on Observer.Create. In this case, I'll assert that the single value in my observable sequence is the value I receive when OnNext is invoked.
asyncTest("Observer should observe OnNext", function() {
    var observable = Rx.Observable.Return(0);
    var observer = Rx.Observer.Create(
        function(next) {
            equals(0, next);
            start();
        },
        function(err) { },
        function() { });
    observable.Subscribe(observer);
});
In the next test, I'll show how the OnError function parameter works. In this case, I'll have an observable produce an error via the Throw method, and my OnError function will check the message and assert that it's the same as the error I threw.
asyncTest("Observer should observe OnError", function() {
    var someError = "FAIL!";
    var observable = Rx.Observable.Throw(someError);
    var observer = Rx.Observer.Create(
        function(next) { },
        function(err) {
            equals(someError, err);
            start();
        },
        function() { });
    observable.Subscribe(observer);
});
Finally, in my last example, let’s create a simple test to show off the OnCompleted behavior. In order to do so we’ll create an empty observable which should not yield any values and instead only invoke the OnCompleted. Then we’ll create an Observer which has the test logic in the OnCompleted function parameter.
asyncTest("Observer should observe OnCompleted", function() {
    var observable = Rx.Observable.Empty();
    var observer = Rx.Observer.Create(
        function(next) { },
        function(err) { },
        function() {
            ok(true, "True when invoked on complete");
            start();
        });
    observable.Subscribe(observer);
});
Creating Observers this way is good for reusability, especially if you wish to attach to any number of observable sequences. But we're not tied to creating them via Create; there are other ways.
Overloading Subscribe
In addition to creating an Observer via the Create method, we also have shortcuts which allow us to create an Observer on the fly with the Subscribe method. In addition to the Subscribe which takes an Observer, we have three other overloads which take functions for our OnNext, OnError and OnCompleted. The first overload takes a function for OnNext, the second takes functions for OnNext and OnError, and the last takes functions for all three: OnNext, OnError and OnCompleted.
Observable {
    Subscribe : function( function(next) { ... })
    Subscribe : function( function(next) { ... }, function(err) { ... })
    Subscribe : function( function(next) { ... }, function(err) { ... }, function() { ... })
}
Each of these, just as above, returns a disposable object which allows us to unsubscribe at any time via the Dispose method.
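One plausible way to picture these overloads (a sketch under my own assumptions, not the real Rx implementation) is a single Subscribe that accepts either an observer object or up to three functions, filling in no-op defaults for any handler you omit:

```javascript
// Illustrative sketch of overload normalization -- not the actual Rx source.
function noop() { }

// Wrap up to three functions into a full observer, defaulting missing ones.
function makeObserver(onNext, onError, onCompleted) {
    return {
        OnNext: onNext || noop,
        OnError: onError || noop,
        OnCompleted: onCompleted || noop
    };
}

// A toy single-value observable whose Subscribe handles both call styles.
function returnValue(value) {
    return {
        Subscribe: function(onNextOrObserver, onError, onCompleted) {
            var observer = (typeof onNextOrObserver === "function")
                ? makeObserver(onNextOrObserver, onError, onCompleted)
                : onNextOrObserver;
            observer.OnNext(value);
            observer.OnCompleted();
            return { Dispose: noop };
        }
    };
}

var results = [];
returnValue(42).Subscribe(function(next) { results.push(next); });
// results is now [42]
```

This shows why passing just an OnNext function is safe: the missing OnError and OnCompleted handlers simply become no-ops.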
Unsubscribing
As I stated earlier, one of the great things about the design of Rx for JavaScript is that it's quite easy to both subscribe and unsubscribe from an observable. The design of Rx for JavaScript follows that of Rx for .NET very closely, including subscribing and unsubscribing. Let's step through an example of how we can use the Dispose method on our subscription. In this instance, we'll have two observers, and after the first value has been produced, we'll unhook the first observer and continue listening on the second. We'll assert that the first has indeed been unhooked while the second continues to listen.
asyncTest("Dispose should unhook observer", function() {
    var nextValue = 0;
    var observable = Rx.Observable.FromArray([1, 2, 3]);
    var disp1 = observable.Subscribe(
        function(next) { nextValue = next; });
    var disp2 = observable.Subscribe(
        function(next) {
            disp1.Dispose();
            equals(1, nextValue);
            start();
        });
});
Such scenarios could be quite helpful in unhooking events when others happen, such as mouse events, keyboard or even AJAX requests. We’ll cover some of those scenarios in upcoming posts.
Conclusion
So, now we've covered the basics of creating observable sequences, Observers and subscriptions. Now that we have some of the basics, what else can we do? That's where some of the LINQ combinators come in handy, and we'll pick those up next time.
This of course is only scratching the surface of what capabilities this library has and there is much more yet left to cover. The question you’re probably asking now is where can I get it? Well, for that you’ll have to stay tuned. I hope to have more announcements soon about its general availability.
What can I say? I love JavaScript and am very much looking forward to the upcoming JSConf 2010 here in Washington, DC, where the Reactive Extensions for JavaScript will be shown in its full glory with Jeffrey Van Gogh. Too many times, we've looked for abstractions over the natural languages of the web (HTML, CSS and JavaScript) and created monstrosities instead of embracing the web for what it is. Libraries such as jQuery, and indeed the Reactive Extensions for JavaScript, give us better tools for dealing with that troubled child that is DOM manipulation, and especially events.
Certified Scrum Product Owner Training class on May 24, 2010 in Boulder.
[26/02/10]
[26/02/10] - The usefulness of interaction tests, or “How to question the method”
[26/02/10] - Castle Project code organization
[26/02/10] - Scrum Challenge #1 OVER: Scrum is…
[25/02/10] - A Vision for FubuMVC’s Component Model (gems, Nu, engines, slices, oh my…)
[25/02/10] - The Exec Problem
[25/02/10] - 3 Days To Go!
[25/02/10] - Random Thought Scrum Challenge – #1
[24/02/10] - DevTeach Toronto 2010 Ultimate Edition
[23/02/10] - Introduction to the Reactive Extensions for JavaScript – Creating Observables
[23/02/10] - Sarasota Web Developer Group - MVC and ASP.NET 4 From Scratch
[22/02/10] - How I Stole an Office and Fixed Our Daily Scrum
[22/02/10] - CST Application Process Improvement Community
[22/02/10] - Scrum Alliance National, USA: Managing Director
[22/02/10] - 10 Things I Hate About Agile Development!
[21/02/10] - XP Day Suisse, Geneva, 29 March 2010
[21/02/10] - In Praise of Middle Management
[20/02/10] - Why use Event Sourcing?