Category Archives: Architecture

Doing the wrong thing…

There are times when your customer asks you to do something, so you do it. But did you stop to ask why they need it done? The story here is an old one that you might have heard, but recently someone gave me an addition to it that blew my mind.

If someone comes to you in the hardware store and asks for a quarter inch drill bit, what do they actually want? Quarter inch holes. A drill bit is just a tool and if there’s a better way to give them the quarter inch hole, give it to them.
The addition was: now ask why they want quarter inch holes… Turns out they’re hanging a ceiling fan. Ask why they’re hanging a ceiling fan. Turns out they’re trying to sell their house and were told that a ceiling fan in the bedroom would help. Ask if you could come take a look… Turns out the paint outside is faded and cracked, so when they asked for a quarter inch drill bit, what they really needed was 5 gallons of paint and a ladder, and in all honesty, they should slap you for selling them the drill bit.

I’ve been reflecting on this a while and I’ve come to the following…

I’ve long had a philosophy that I’d like to

“Do something, anything, even the wrong thing, learn from it and iterate.”

This has been my guiding light, has helped keep me from getting stuck in analysis paralysis, and drives the analytical people around me absolutely nuts (looking at you, Gary Sweeting…) 🙂
That said, I was talking to my new friend Bas Wouterse (a CTO I met at Collision Conf this week), and I realized that I’m slightly off there. It was an aha moment as I realized that I’ve been voicing my actual process wrong for years.

Here’s my new motto

“I’d rather do the wrong thing than solve the wrong problem”

The original motto still applies once I figure out what problem I’m solving, but it’s not enough to just start doing stuff. I was out with a young developer who asked what the difference is between an architect and a developer. My answer was, “An architect is an experienced developer who cares about the requirements phase of the project.” And that’s the case. I care about the requirements. I’ll investigate until I get a grasp of the problem, then start doing stuff in that direction. I’m normally wrong in my approach the first several times, but as I gain experience, I’ve gotten better about being less wrong about my direction even in those early phases. The trick is to quickly realize that I’ve made a mistake and fix it. I don’t try to make mistakes, but I’m not scared to make them.

The biggest mistake though, as I’m learning from my previous mistakes, is to solve the wrong problem. It doesn’t matter how correct your solution is if it’s the wrong problem. This goes from macro to micro.

At the macro level, I’ve met a ton of startups who are solving the wrong problem. Normally this means that they are solving a problem that their users have but not solving a problem that their potential customers have. Noodle on that a bit… 🙂

On a micro level, customers often tell me something generic and sweeping like “X is not responsive enough.” Rather than diving straight into speed testing and all that, I ask them what they mean and start unpacking what “responsive” means to them specifically. Often it turns out that you and your customer have very different definitions of responsive. Once you figure out what responsive means, anything you do to solve the responsiveness issue has a much higher probability of being a step in the right direction.

Summing up, spend the time to ask that next question that will get you closer to solving the right problem.

PHP On Azure Resources

I’m at JumpIn Camp in Zurich and we’ve been diving deep into PHP on Azure. One of the things that we’ve done is talk about a ton of resources that are available out there on the web to learn more about PHP on Azure. To that end, I thought I’d collect a few of them here on my blog.

In the morning, I talked at a high level about what Azure is, how the various roles work and how to run PHP on Azure. My deck that I used was the first half of the same deck that I used on the PHP On Azure World Tour.

Another great starting point and set of resources is Maarten Balliauw’s Blog itself. He’s been helping out here at JumpIn Camp from a technical perspective on Azure and running PHP on Windows in the first place. He did the next part of the session diving deep into the PHP on Azure SDK.

You’ll notice some overlap between our decks because we’re largely talking about the same SDK and leveraging the same code examples.

Maarten’s first deck, which he used to talk about Blob, Queue and Table storage, is:

The second one that Maarten used to talk about SQL Azure is:

Maarten also did a demo of an app called ImageCloud that leverages both a Web and a Worker role to do front-end uploading of an image and backend processing of that image. That code can be found at ImageCloud Azure Demo Application.

For some great resources on architecture guidance, take a look at Windows Azure Architecture Guidance. This is put out by the Patterns and Practices group at Microsoft.


Another great resource is Benchmarking and Guidance for Windows Azure. This was created and launched by the Extreme Computing Group (aka XCG).


More resources:

Microsoft Windows Azure Interop

Microsoft Interop Bridges

Windows Azure 4 Eclipse


Windows Azure MySQL PHP Solution Accelerator


I’ll be adding to these resources over the course of the week so check back for lots more.

When to use what Microsoft Client Technology

I was asked earlier when to use which Microsoft client technology. I thought about just sending a link to Michael Schroeder’s post but decided I should put in my own thoughts on the matter first.

At the heart of Michael’s post is this table.

WPF
- Client: Windows XP SP2 (with .NET 3.0), Vista and obviously Windows 7
- Deployment: Downloadable installer or ClickOnce
- When to use: Programs that need access to Windows desktop files

WPF XBAP
- Client: Internet Explorer + Windows XP SP2 (with .NET 3.0) & Vista
- Deployment: Runs in Internet Explorer secure sandbox
- When to use: Intranet applications for Windows-oriented companies

Silverlight
- Client: Firefox, Mac Safari, Internet Explorer
- Deployment: One-time install of Silverlight plug-in
- When to use: Rich Internet Applications for public-facing web sites

ASP.NET + AJAX
- Client: Any web browser
- Deployment: Web page
- When to use: General-purpose public-facing web sites

Here’s my 2 cents on the subject.


WPF is a fantastic choice for applications that need full access to the desktop for any number of reasons. That could be full 3D support, access to desktop files and the like. You can install these applications through XCopy, a full downloadable Installer or a ClickOnce installer. Where possible, I like to leverage the ClickOnce installer as it gives some amazing benefits around auto-update and keeps my application in a secure sandbox so deployment becomes really easy.


Just don’t use XBAPs anymore. This was an attractive option for Intranet applications back before Silverlight 2 and to a lesser degree Silverlight 3. However, now that Silverlight has the power that it does with .NET and OOB options and the like, opt for Silverlight anytime you would have considered XBAPs.


Silverlight is the right choice for any external-facing application. But that’s the key. I really look at Silverlight not as an HTML replacement but as a true application layer. That’s one of the central points in the talk that James Ward and I did at Web 2.0 Expo last year.


ASP.NET + AJAX is the right choice for external facing, or even internal facing, web sites where the primary focus is information dispersal. That said, there are some amazing applications built with JavaScript in the browser.


The reality is that there are a lot of grey lines. WPF is getting a lot easier to deploy, breaking down the traditional decision points between desktop and web applications. Rich web applications blur those lines as well; they could be used to build a lot of applications that have historically been written either as full desktop applications or as web applications. And on the web side, JavaScript and the browser are getting faster, stronger, and easier to develop for all the time, so they’re becoming a more viable set of technologies for building applications.

Building a Simple Photo Gallery in ASP.NET MVC Framework

I decided to create a simple photo gallery in the ASP.NET MVC framework. The fun part is that this level of application is really the new “Hello World”. It takes less time to build than “Hello World” did back in the day.

In this post, I’ll walk you through the process of creating this simple photo gallery with the MVC framework.

First, let’s talk a little about what the ASP.NET MVC framework is. It’s a web framework built on .NET with the principles of the MVC architecture behind it.

The MVC Architecture

MVC architecture divides the responsibilities of an application into three main components – models, views, and controllers.

“Models” are responsible for the data access. The data is often in a database, but it doesn’t have to be. The model could sit over an XML file or whatever other data store you happen to use. By default the ASP.NET MVC framework uses the Entity Framework, but it can work with any data access approach that returns a set of objects the view can access. Most of the time, this will be an ORM such as the Entity Framework, NHibernate or SubSonic. In our demo below we’re actually just going to read in an XML file from the disk.

“Views” are responsible for the actual user interface. Typically this is HTML, but it could be XML, JSON or any number of other types of display or service response. Most of the time, these displays/responses are built based on model data.

“Controllers” are responsible for the actual logic. They handle the end user interaction, manipulate the data in the model and decide which view to return to the user. Simple enough?

Creating the ASP.NET MVC Framework Project

I started out creating an ASP.NET MVC Web Application called PhotoGalleryMVC. There are a couple of very important things to notice in an ASP.NET MVC framework project.

First, look at the Global.asax and its code-behind. It’s got a really important method called RegisterRoutes where you define your routes.

public static void RegisterRoutes(RouteCollection routes)
{
    routes.MapRoute(
        "Default",
        "{controller}/{action}/{id}",
        new { controller = "Home", action = "Index", id = "" });
}


These routes define what happens when your application receives a request. The controller is a class and the action is a method on that class. The parts after that are parameters to the method. We’ll see more with this in a few moments.
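To make the mapping concrete, here’s a toy sketch of how a URL breaks down against the {controller}/{action}/{id} pattern. This is just an illustration of the defaults, not the real routing engine, and the class name is mine:

```csharp
using System;

class RouteDemo
{
    // Toy illustration of the default "{controller}/{action}/{id}" route:
    // missing URL segments fall back to the defaults from RegisterRoutes.
    public static (string Controller, string Action, string Id) MapUrl(string path)
    {
        var parts = path.Trim('/').Split(new[] { '/' }, StringSplitOptions.RemoveEmptyEntries);
        string controller = parts.Length > 0 ? parts[0] : "Home";  // default controller
        string action     = parts.Length > 1 ? parts[1] : "Index"; // default action
        string id         = parts.Length > 2 ? parts[2] : "";      // default id
        return (controller, action, id);
    }

    static void Main()
    {
        var route = MapUrl("/Image/Index");
        Console.WriteLine(route.Controller + "Controller." + route.Action + "()");
    }
}
```

So a request for /Image/Index invokes the Index method on ImageController, and a bare / falls back to HomeController.Index().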

The next thing to notice is the controllers. The default action that you get in the HomeController is as follows:

public ActionResult Index()
{
    ViewData["Title"] = "Home Page";
    ViewData["Message"] = "Welcome to ASP.NET MVC!";

    return View();
}
This is the default action for the controller. It’s simply setting some properties on the View and then returning it. Notice that we’re not instantiating a copy of the view and setting properties directly on it. Instead, we’re staying with the very loosely coupled method of using a ViewDataDictionary called ViewData. This is a dictionary of items that both the view and the controller have access to.

Creating the ImageModel

The first thing I want to create is a way to get the images in the first place. Rather than creating a database, we’re going to simply use an XML file as our storage for our information about our images.

Create a folder called Images under the root of the project. This will be where we put the images.

Add a file called ImageMetaData.xml to the Images directory, following the format below. Feel free to substitute your own data for the data I have below…

<?xml version="1.0" encoding="utf-8" ?>
<images>
  <image>
    <filename>GuitarHero.jpg</filename>
    <description>Paul playing Guitar Hero.</description>
  </image>
  <image>
    <filename>PhizzpopSignin.jpg</filename>
    <description>Phizzpop Signin.</description>
  </image>
</images>

Add a class called Image under the Models folder. For now this will be really simple.

namespace PhotoGalleryMVC.Models
{
    public class Image
    {
        public Image(string path, string description)
        {
            Path = path;
            Description = description;
        }

        public string Path { get; set; }
        public string Description { get; set; }
    }
}

All this class provides for now is a holder for the image path and description. We’ll do more with this class in the future.

The next thing that we need to do is create a way to get those images from the disk. This will be in a class called ImageModel. To make this really simple, we will inherit from a generic list of Image. This gives us a lot of functionality already. What we need to add is a constructor that will retrieve the images from the disk.

namespace PhotoGalleryMVC.Models
{
    public class ImageModel : List<Image>
    {
        public ImageModel()
        {
            string imagesDir = HttpContext.Current.Server.MapPath("~/images/");
            XDocument imageMetaData = XDocument.Load(imagesDir + @"/ImageMetaData.xml");
            var images = from image in imageMetaData.Descendants("image")
                         select new Image(image.Element("filename").Value,
                                          image.Element("description").Value);
            AddRange(images);
        }
    }
}

All this model is doing is reading in the XML file and creating a list of images based on that metadata.

Creating the Controller

The next step is to create the controller. Again, for the moment, this will be extremely simple. We’ll do more with it in the future.

namespace PhotoGalleryMVC.Controllers
{
    public class ImageController : Controller
    {
        public ActionResult Index()
        {
            return View(new ImageModel());
        }
    }
}

Notice that this is slightly different than the default controller as it’s passing in the ImageModel. We’ll have to create the View to accept it here in just a moment.

Creating the View

Now we need to add a folder under Views to hold our Image views. To create the view, right-click on that folder and select Add View. Name the view Index.

Now that we have our view, modify its declaration to accept the ImageModel class.

namespace PhotoGalleryMVC.Views.Image
{
    public partial class Index : ViewPage<ImageModel>
    {
    }
}

What this does is set up our view as a generic ViewPage with ImageModel as its model type.

And lastly we need to add the HTMLish stuff to do the actual display.

<%@ Page Title="" Language="C#" MasterPageFile="~/Views/Shared/Site.Master"
    AutoEventWireup="true" CodeBehind="Index.aspx.cs"
    Inherits="PhotoGalleryMVC.Views.Image.Index" %>

<asp:Content ID="Content1" ContentPlaceHolderID="MainContent" runat="server">
    <% foreach (var image in ViewData.Model) { %>
        <span class="image">
            <a href="images/<%= image.Path %>"><img src="images/<%= image.Path %>" /></a>
            <span class="description"><%= image.Description %></span>
        </span>
    <% } %>
</asp:Content>

If you’ve ever done ASP Classic or PHP, this HTMLish stuff shouldn’t look too odd. If you strip out the HTML, you’ve got a normal foreach loop written in C#. The bad news about this approach is that there are far fewer controls, such as the DataGrid, available to you. The good news is that you’ve got absolute control over the HTML that is produced.

You should notice, however, that we’re able to leverage master pages as we do in ASP.NET 2. This is great because it allows us to define our look and feel in a master page. There’s a great amount of flexibility and power in that.

The last step is to add a tab to the main navigation to get to the images page. We do that in /Views/Shared/Site.Master:

<ul id="menu">
    <li><%= Html.ActionLink("Home", "Index", "Home")%></li>
    <li><%= Html.ActionLink("Images", "Index", "Image")%></li>
    <li><%= Html.ActionLink("About Us", "About", "Home")%></li>
</ul>

Even though we’ve got few controls at our disposal, there are some interesting helpers such as this Html.ActionLink. It returns a link that points to the appropriate controller and action without us having to divine what that link should be from the current routes.

At this point, the application runs and shows really big pictures (assuming that you’ve put a few in the images folder in the first place).

Adding a New Picture

Now that we’ve manually placed a few pictures in the folder and gotten them to display in the view, we need a way for the user to add their own pictures to the site. We’re going to do this one in reverse order: create the view and work backwards from there.

Step one is that we need a new view and a way to get to it from the images page. We can accomplish that with a simple Html.ActionLink in the Image Index view.

    <p><%= Html.ActionLink("Add your own image", "Upload", "Image")%></p>

Now we need to create the view for the Upload action. Simply right-click on the Views folder and select Add | View. Name this view “Upload”.

In the view, we need to create a form that will do the post.

<%@ Page Title="" Language="C#" MasterPageFile="~/Views/Shared/Site.Master"
    AutoEventWireup="true" CodeBehind="Upload.aspx.cs"
    Inherits="PhotoGalleryMVC.Views.Image.Upload" %>

<form method="post" action="<%= Url.Action("save") %>" enctype="multipart/form-data">
    <input type="file" name="file" /> <br />
    <input type="text" name="description" />
    <input type="submit" value="submit" />
</form>

It’s not the world’s prettiest form but it’s functional. Notice the action on the form tag. It’s using another helper called Url.Action. This maps to the same controller but a different action.

Now we need to add the Upload and Save actions to the controller. The Upload action is very simple: it just returns the Upload view. The Save action is a little more complicated, as it has to do the actual logic of getting the files and descriptions and handing them to the model.

namespace PhotoGalleryMVC.Controllers
{
    public class ImageController : Controller
    {
        public ActionResult Index()
        {
            return View(new ImageModel());
        }

        public ActionResult Upload()
        {
            return View();
        }

        public ActionResult Save()
        {
            foreach (string name in Request.Files)
            {
                var file = Request.Files[name];

                string fileName = System.IO.Path.GetFileName(file.FileName);
                Image image = new Image(fileName, Request["description"]);

                ImageModel model = new ImageModel();
                model.Add(image, file);
            }

            return RedirectToAction("Index");
        }
    }
}

The important part here is that the controller is not actually doing the logic of saving out to the disk. This is important because it gives us the flexibility to alter the model, switch from file-based storage to a database and so on. This separation is key to the success of the architecture.

Last thing to do is alter the model to actually save out to the disk.

public void Add(Image image, HttpPostedFileBase file)
{
    Add(image); // add to the in-memory list (the inherited List<Image>.Add)

    string imagesDir = HttpContext.Current.Server.MapPath("~/images/");
    file.SaveAs(imagesDir + image.Path);

    XElement xml = new XElement("images",
        from i in this
        orderby i.Path
        select new XElement("image",
            new XElement("filename", i.Path),
            new XElement("description", i.Description)));

    XDocument doc = new XDocument(xml);

    doc.Save(imagesDir + "/ImageMetaData.xml");
}

The LINQ makes creating the XML document really simple.

There are a lot of optimizations that could be done here, such as caching the model in memory so that we’re not constantly reading from and writing to the disk. But that’s not the point of this exercise. The point here is to work with the MVC framework.

At this point we’ve got a functioning image gallery with uploads and a view.

In my next post, I’ll alter this to serve up thumbnails and give a nicer user experience.

Architecture of RIA from JAOO

I did a joint session with James Ward from Adobe at the JAOO conference. As you know, I’m an evangelist for Microsoft focusing on RIA and UX. James is one of the Flex evangelists for Adobe.

This is a talk that James and I have been talking about trying to pull off for quite a while and I was thrilled that we actually got to do it. James and I have been going back and forth for over a year and a half now talking about the definition of RIA as well as what are the best and worst architectural patterns. Some of this was based on an article that James co-wrote for InfoQ called “Top 10 Mistakes when building Flex Applications”. I borrowed the mistakes that applied across the board regardless of what RIA technology you were using and added the best practices part.

The first time that I delivered a version of this session it was with Mike Labriola at RIAPalooza.

The slides, “Architecture of RIA,” are available on SlideShare.


James and I both welcome emails and contact – email addresses in the slide. You can also comment on the blog. We’ll both be watching the comments here.


The first question that we have to ask ourselves is – what do we mean by RIA?


The acronym could mean anything.
It could be the Rural Inoculation Association, who’s out there trying to immunize all of the cows and chickens in the world.
It could be the Rare Isotope Accelerator – you know, the one that didn’t end the world… Yeah – I’m thrilled about that.
All the way down to Really Inane Acronym – which is the one I often go with.

But in this session we’re talking about RIA as Rich Internet Applications. This means that we are not talking about simple media players or fancy splash screens or advertisements. We are talking about solid enterprise quality applications that leverage the Internet as a deployment model and typically are built on one of the Rich Internet platforms such as Silverlight or Flash. These are meant to enhance the user’s experience and if you do a good job with design, you will dramatically improve the usability of the application. This is what James and I are both passionate about.


Before you decide that you need to build a RIA, you need to first think about your users and how they are going to use the application. This will help determine where on the continuum of user experience you should target and what type of application you should write.

On the far left hand side, if you are going for absolute ubiquitous reach and need to have information in front of the widest possible audience, text over http is the lowest common denominator. HTML and CSS will still have a long and prosperous life on this end of the spectrum.

On the far right hand side, the guys that were writing Halo for the XBox 360 were able to test the exact hardware, right down to how fast the hard drive spins and which exact video card was in the machine. This means that they are able to make trade offs between how large a map is and what textures are on the walls and so on. This is a huge advantage when trying to create a really rich experience for the users.

However, most people don’t have the luxury of shipping hardware with their software. But can you target a desktop application on a given operating system? Or a family of operating systems?

If you can’t, then you need to start looking at this supplemented web space that we are talking about with RIA.


So what’s different about RIA development? Really, it depends on the skill set of your team. The interesting part is that it’s actually a much tougher jump for web developers than for desktop developers.

For desktop developers, there are a number of things to get used to. For example, they are locked into a secure, browser-based sandbox. This means that they can’t do a lot of the things they are used to doing, such as reading and writing anywhere on the hard drive, reading from the registry, accessing local hardware or any number of other typical desktop tasks.
The back and refresh buttons are also quite scary to the desktop developer. Conceptually, you can think of it as someone opening up the task manager and killing your application. Oops. The question is, what do you do when someone does that? There are a lot of different strategies that you can leverage, but the point is that you do have to think very clearly about this potential issue.
State management is also an issue that we have to think about more. On the desktop, it’s natural to just have your state locally. But in this RIA space, what do your users expect if they open up a browser on a second machine? hmmm. You might need to store your state on the server side.
And typically you have a more limited runtime in the browser than you do on the full desktop. For example, the full .NET runtime is about 50 meg and Silverlight is just 4 meg. That’s quite a difference.

However, none of these issues are fundamental shifts in how you think or go about doing your job.

For web developers, on the other hand, there are some serious mind shifts that have to happen. We are used to, as web developers, having everything from the server on hand at any given time. The UI itself is simply rendered HTML. All of the logic and work happens on the server. Often, this happens in a single tier.

Now that we are looking at the RIA space, we have to think about where the business logic goes. Sometimes that’s on the UI side, running in the browser. Fundamentally this means the web developer needs to understand service-oriented architecture. This is a big change from what we are used to, where we could, if we so desired, open up a database connection and query directly from the UI logic layer. Instead, our UI logic is running out in the browser, where it doesn’t have access to do that through the firewall, etcetera.

Ron Jacobs talks about a lot of the possible issues in a set of talks called SOA Patterns.

For example, many people look at mapping their database directly to their web service tier. This is the anti-pattern that Ron calls the CRUDy web service layer. Really, you are not service-orienting your application; you are simply exposing the database tier to the rest of the world.
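As a sketch of the difference (the interface and method names here are mine, not Ron’s), compare a service layer that just exposes table rows with one that exposes named business tasks and keeps the rules behind the service boundary:

```csharp
using System;
using System.Collections.Generic;

class Customer
{
    public int Id;
    public string BillingAddress;
}

// Anti-pattern: a thin wrapper over the Customers table. Callers must know
// the schema and enforce the business rules themselves, out in the browser.
interface ICustomerCrudService
{
    Customer Get(int id);
    void Update(Customer row);
}

// Service-oriented: a named business operation; validation lives server-side.
interface ICustomerService
{
    void ChangeBillingAddress(int customerId, string newAddress);
}

class CustomerService : ICustomerService
{
    private readonly Dictionary<int, Customer> _store = new Dictionary<int, Customer>();

    public CustomerService(params Customer[] seed)
    {
        foreach (var c in seed) _store[c.Id] = c;
    }

    public void ChangeBillingAddress(int customerId, string newAddress)
    {
        // The rule is enforced here, behind the boundary, for every caller.
        if (string.IsNullOrWhiteSpace(newAddress))
            throw new ArgumentException("An address is required.");
        _store[customerId].BillingAddress = newAddress;
    }

    public Customer Find(int id) { return _store[id]; }
}

class Program
{
    static void Main()
    {
        var svc = new CustomerService(new Customer { Id = 1, BillingAddress = "Old" });
        svc.ChangeBillingAddress(1, "10 Downing Street");
        Console.WriteLine(svc.Find(1).BillingAddress);
    }
}
```

The CRUDy interface leaks the schema to every client; the task-oriented one keeps the validation in exactly one place.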

Once you get over this hump, the rest of the changes are relatively small in comparison.


Onto the best practices… We have laid out 10 best practices here. These are by no means the only solid practices; they just happen to be the 10 that James and I thought applied across the board regardless of which RIA platform you are using.



Those couches, no matter how pretty they are, are not amazingly comfortable. The primary point here is that your application has to be functional and usable or nobody will use it regardless of how pretty it is or what technologies you are using.


The easiest way to make sure that you are building a functional application is to focus on the architecture.

This is a picture from Taliesin West, Frank Lloyd Wright’s winter home in Arizona. He spent a lot of time working on the overall architecture and the look of the building.


However, he spent almost as much time on the inside. He built much of the furniture, designed the lighting, the flow of the rooms, the acoustics and much more.

The lesson that we can learn from this is that we should spend as much time on the inside of our application and the architecture of the client side as we do on the overall application. You really need to apply a lot of rigor to the architecture of the client side as well as the overall application.

There are two client side architectural patterns that are the front runners that we should talk about.


First is the MVC, or Model, View, Controller, pattern. The idea here is that you have three separate layers with very distinctive roles.

The model is the first layer that we need to talk about. It reflects your web service layer, not the database but what’s returned from the services. This is the only access layer to the services and hides away the details of which services, protocols, security and other details from the other layers.

The second layer to talk about is the controller. This is the logic. It makes the decisions as to which view is shown, what data is changed in the model and so on. It watches what’s going on in the view for various events and responds to those events by making updates in the model.

The third layer is the view. Often, there are multiple views for a given controller; for example, there might be both a complex and a simplistic rendering of a given item from the model. The view is very thin as far as logic goes. It watches the model for changes and updates itself based on those changes, which are either the result of logic in the controller or of a web service call. In Silverlight, WPF or even WinForms, this watching for changes in the model is often implemented as data binding. Since it’s data-bound to objects, the view can decide which attributes of the object it wants to show.
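Here’s a stripped-down sketch of that wiring (the class names are mine): the view watches the model for changes, which is roughly what data binding does for you in Silverlight or WPF, and the controller responds to input by updating the model, never the view.

```csharp
using System;
using System.ComponentModel;

// Model: raises PropertyChanged, exactly what WPF/Silverlight binding listens for.
class PersonModel : INotifyPropertyChanged
{
    public event PropertyChangedEventHandler PropertyChanged;

    private string _name;
    public string Name
    {
        get { return _name; }
        set
        {
            _name = value;
            PropertyChanged?.Invoke(this, new PropertyChangedEventArgs("Name"));
        }
    }
}

// View: holds no logic; it just re-renders when the model changes.
class PersonView
{
    public string DisplayedName { get; private set; }

    public PersonView(PersonModel model)
    {
        model.PropertyChanged += (s, e) =>
        {
            if (e.PropertyName == "Name") DisplayedName = model.Name;
        };
    }
}

// Controller: reacts to "user input" by updating the model, never the view.
class PersonController
{
    private readonly PersonModel _model;
    public PersonController(PersonModel model) { _model = model; }
    public void HandleRename(string newName) { _model.Name = newName; }
}

class Program
{
    static void Main()
    {
        var model = new PersonModel();
        var view = new PersonView(model);
        new PersonController(model).HandleRename("Ada");
        Console.WriteLine(view.DisplayedName); // the view picked up the change
    }
}
```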


The second pattern to talk about is the MVP or Model, View, Presenter pattern.

The first layer, the model, is actually very similar.

The second layer is the presenter. One of the big differences here is that the presenter actually updates the view with the changes from the model, rather than the view watching for those changes. The result is twofold. First, the view and the presenter are much more closely tied together. Second, since the presenter is doing all of the input and output, it’s easier to unit test.

The last layer, the view, is much thinner than in the MVC pattern. It’s simply a presentation of the data that the presenter has chosen to show.
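A matching MVP sketch (again, the names are mine): the view is passive and the presenter pushes model changes into it, which is why handing the presenter a fake view makes unit testing straightforward.

```csharp
using System;

// The view is just a contract; it displays whatever the presenter pushes in.
interface IPersonView
{
    void ShowName(string name);
}

class PersonModel
{
    public string Name { get; set; }
}

class PersonPresenter
{
    private readonly PersonModel _model;
    private readonly IPersonView _view;

    public PersonPresenter(PersonModel model, IPersonView view)
    {
        _model = model;
        _view = view;
    }

    // All input and output flows through the presenter.
    public void Rename(string newName)
    {
        _model.Name = newName;
        _view.ShowName(_model.Name); // the presenter updates the view directly
    }
}

class ConsoleView : IPersonView
{
    public void ShowName(string name) { Console.WriteLine("Name: " + name); }
}

class Program
{
    static void Main()
    {
        var presenter = new PersonPresenter(new PersonModel(), new ConsoleView());
        presenter.Rename("Grace");
    }
}
```

In a unit test you would swap ConsoleView for a fake IPersonView and assert on what the presenter pushed into it.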


I personally prefer the MVC pattern. I don’t think the extra testability that you get out of MVP is worth the loss in flexibility in the view. Unit testing is still quite possible in MVC and should definitely be part of the process.



The second best practice is that you should have a set of design tenets that the team shares. Really, these are values that should be held by the designers, developers and all of the stakeholders, and they have to be agreed on by the team at the beginning of the project. I actually like to do two sets: one for the UI layer and one for the overall development process of the application. For example, a UI-layer tenet might be “Search is Failure”: if the user has to hit search in order to find something on your web site in the course of normal navigation, you failed in the design and navigation of the application. On the development side, think about TDD, or Test Driven Development, as one of the tenets that you hold.



The third best practice is to use the appropriate level of fidelity for the user’s context. There are a couple of things to talk about here.


The first is when you are developing a prototype. If you bring in an amazingly beautiful, fully polished prototype with a ton of colors and full animations, the user is going to do one of two things. Either they are going to argue with you over the exact shade of red or some other little detail without really getting through the functionality of the application, or they will say “cool, you’re done.” It’s really hard to explain to a non-technical person the difference between a good-looking prototype and a finished application, and why it’s going to take 9 months to make that leap.

The answer to this is to use a set of printed mockups for the look and feel and a skin such as ProtoXAML for the running prototype so that you can work through the functionality without getting into the arguments about look and feel. 


The second item to talk about with regards to fidelity is the forest for the trees. This means understanding the user’s context and only showing them the data that they need in that context. For example, if you are dealing with a C-level executive, you shouldn’t show them how much a particular pencil costs. Instead, you should show them how much office supplies cost in general, and if they want to dive into that level of detail, let them. Another thing to think about is what should be on a dashboard versus in the full application or report.



The fourth best practice is to build with both the customer and user’s input. Step one here is to recognize that these are indeed separate people. The customer is the one who is signing the checks. Often this is some layer of management far removed from the actual day to day operations that the users are doing. The users are the ones that are actually going to be using your application and getting upset with you about the things that don’t work the way that they want.

This is one of the central themes in most agile methodologies. Most actually want to have one of the users on the development team sitting in the meetings and providing input the entire time.



The fifth best practice is to understand who your users are and what type of users you have. For example if you have a public facing web site, you’ll have something like the curve in the slide with some large percentage of your users being first time visitors to your site, some smaller percentage being repeat visitors and some really small percentage being your power users.

Your goal should be to turn those brand new to the site into repeat visitors and then into power users. For those that are brand new to the site, you need to explain what your web site does and why they want to come back. On the other end, the power users shouldn’t be bothered by that introductory information that you present to the new visitors. A couple of sites that do this really well are WordPress and Twitter.



Sixth is planning for concurrency. Concurrency is always an issue in application development, it’s just highlighted in RIAs as the client is running somewhere on the network or across the Internet in the client’s browser.

The fun issue with concurrency is that it’s hard to test for in development, because typically the developer has their own dev environment and/or a database full of junk test data. This makes it hard to spot concurrency issues. Instead they find these issues in training, when the trainer asks the 30 students to open up Mr. Jones, change his address and save. At that point, what happens? Which of the users actually saved the new address successfully?


There are two basic forms of concurrency. Optimistic and Pessimistic.

Pessimistic concurrency includes locking down the rows that you are accessing until you are finished with them. It really isn’t a consideration in RIAs since you don’t have a long running transaction with an open connection to the database.

However, simple last-in-wins optimistic concurrency is really not concurrency handling either. You need to think through the various scenarios and understand where you need to detect that there was a change, and then decide what to do with that change. To detect that there was a change, the traditional strategies are either to pass along both the original version of the data that you retrieved in the first place as well as the changes, or to use a timestamp of some sort. As far as what to do with the change, you might be able to perform logic to make the determination, such as when there is an addition or subtraction of some numerical amount. Most of the time, however, you need to raise awareness to the user that there was a change and have them decide what to do. Other times you need to think about doing some type of escalation to a manager. Obviously that requires more development and thought, but it’s worth the time.
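A minimal sketch of the version/timestamp approach (the row shape and function names are illustrative, not from any particular data-access framework):

```typescript
// Optimistic concurrency with a version number: an update only succeeds
// if the version the client originally read is still the current one.

interface CustomerRow {
  id: number;
  address: string;
  version: number; // incremented on every successful write
}

// A stand-in for the database, keyed by row id.
const db = new Map<number, CustomerRow>();
db.set(1, { id: 1, address: "100 Main St", version: 1 });

// Returns true if the update was applied, false on a concurrency
// conflict (someone else changed the row since this client read it).
function updateAddress(id: number, newAddress: string, readVersion: number): boolean {
  const row = db.get(id);
  if (!row || row.version !== readVersion) {
    return false; // conflict: surface this to the user, don't silently overwrite
  }
  db.set(id, { ...row, address: newAddress, version: row.version + 1 });
  return true;
}

// Two users both read version 1, then both try to save:
const firstSave = updateAddress(1, "200 Oak Ave", 1);  // applied
const secondSave = updateAddress(1, "300 Elm Rd", 1);  // conflict detected
```

On the `false` path is where the interesting decisions live: merge automatically, prompt the user, or escalate.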



The seventh best practice is balancing the computing load. Think about the fact that you’ve got the ability to do a lot of logic client side and can offload the computing load from the server that way. However, there are still a lot of good reasons to keep the logic server side. The question is what’s the decision tree on where the logic should run. My preference is to keep the operations as close to the data as possible. If most or all of the data that you need is client side, there’s no reason to burn the extra network traffic and time waiting on the round trip. On the other hand, if the majority of the data is server side and you can process the data and just return the results of the processing – do that.



Security is a huge issue and really hard to get right. If you make it too tough, people will find ways around it or stop using your application altogether.


Both Silverlight and Flash have security protocols around calling web services. They’re based on the domain that your application was loaded from and the domain on which the application is trying to call the service. If you were loaded from the domain you’re trying to call, then there are no security issues. The domain is defined as the combination of the domain name (including sub domain such as www), protocol (http or https) and port (such as 80 or 8080). If any one of these is different, then it’s considered a cross domain call. That means that changing any one of them – adding or dropping a www sub domain, switching between http and https, or calling a non-default port – results in a different domain. The reason behind this is that any of those variables – sub domain, port or protocol – could point to a different server. That possibility of changing servers is considered a cross domain call and more security kicks in.
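As a rough sketch, the same-origin rule the plugins apply internally boils down to comparing those three parts of the URL (the function name and example domains are illustrative):

```typescript
// A call is cross-domain if scheme, host (including sub domain), or
// port differ from the origin the application was loaded from.

function sameOrigin(appUrl: string, serviceUrl: string): boolean {
  const a = new URL(appUrl);
  const b = new URL(serviceUrl);
  // Compare each part explicitly: protocol ("http:" vs "https:"),
  // hostname ("www.contoso.com" vs "contoso.com"), and port.
  return a.protocol === b.protocol
      && a.hostname === b.hostname
      && a.port === b.port;
}
```

Changing any single component – protocol, sub domain, or port – makes the check fail and triggers the cross-domain policy machinery.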

The reason that this matters is that when the application makes that call, all of the cookies for the domain that you are trying to call are passed along with the call. This is not an issue if you are calling a server that doesn’t have private information such as the public web services on Flickr or book searches on Amazon and the like. However, this is a huge issue if the application can call some outside domain that does have private information, such as Paypal or your hospital or some other server that has sensitive information, and pretend to be you by passing in those cookies.

Since it’s the server that knows whether or not it holds sensitive data, the server gets to decide if it is going to allow that call. The method for doing that is a policy file. The Adobe version of this file is the crossdomain.xml and the Microsoft version is called the clientaccesspolicy.xml though Silverlight will leverage the crossdomain.xml file if it doesn’t find the clientaccesspolicy.xml file. In these policy files the server can specify which domains, from all down to a very specific one, are able to call which services.


The quick dos and don’ts for a server that’s expecting calls from RIA applications are divided into private services that your own applications are going to call and public services that you are opening up for third party applications to call.

For private services:
Do use browser-based authentication through cookies, HTTP Auth and so on. This will allow your application to leverage the existing authentication methods that you are using with the rest of your web applications. This is a big win.
Do not, since these are private services that are using browser based authentication, enable public access via a cross-domain policy file of any sort.

For public services:
Do not use browser-based authentication. You can either just open up anonymous access or pass in the credentials on each of the service calls and use more traditional authentication methods from the SOA world.
Check on the calling application’s URL and other authentication techniques.
And definitely separate the public from the private services – into different domains if possible, but at the very least into different subdomains.



If you’ve not spent any time in a support center answering calls from irate users, you should. It will change you’re outlook on writing software, logging, bug reporting and more. Now, let’s have the application running out there in a browser in a secured sandbox so that you’re users don’t have direct access to any log fine and the issues that they might run into could be network issues and you wouldn’t able to log errors on the server side.

Do you see the problem? 

One technique to deal with this is to code for a parameter that the user can pass in on the URL that will bring up an error console that the user can read back to you. For the error log, you can store the errors in a cookie or local storage.
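A minimal sketch of that idea – an in-application error log plus a query-string switch that reveals it. The parameter name and functions here are illustrative; in a real RIA you would persist the log to isolated or local storage so it survives a reload:

```typescript
// Keep a small error log inside the application and only reveal it
// when the user adds ?debug=1 to the URL.

const errorLog: string[] = [];

function logError(message: string): void {
  // Timestamp each entry so the user can read it back in order.
  errorLog.push(`${new Date().toISOString()} ${message}`);
}

// Check the query string for the hypothetical debug switch.
function shouldShowErrorConsole(queryString: string): boolean {
  return new URLSearchParams(queryString).get("debug") === "1";
}
```

During a support call you ask the user to re-open the application with `?debug=1` appended and read the console back to you.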

The point is that you need to think long and hard about supportability and what could possibly go wrong and how to handle it.



Very importantly, you have to keep your user’s context in mind. Are they mobile? Disabled? What role do they play? How are they going to be using your application? What’s the minimum data that they need to accomplish their duties?

By remembering your user’s context you can build the most effective application for them in their unique situation.


I’d love to hear about best practices that you’ve uncovered in your work as a RIA designer, developer and architect.


I learn best from my failures and the best practices wouldn’t be best practices if there weren’t worst practices. As such, I don’t think that any best practices talk is complete without addressing the possible worst practices.



The first possible worst practice when creating a rich internet application is creating a rich internet application in the first place. You shouldn’t use 2.0 technologies to build a 1.0 web site. HTML, CSS and light javascript can go a really long way in creating a beautiful site that’s rich with information. You have to think about the user’s interactivity and context when picking the technologies that you’re using. We are all guilty of finding a slick technology and picking it as our hammer, going around making every problem a nail.



Many RIA applications forget about the page refresh and back button. By default, when the user hits refresh, the application unloads, reloads and starts over from the beginning, forcing the user to navigate back to where they were in the first place. By default, when the user hits the back button, the page with the application in it is unloaded as the browser goes back in the history to the previous page. In either case, this is probably not what the user expected.

If they were using a traditional HTML based web application, the refresh would simply reload the page that they are on. If there was a postback involved, it will even offer the user the possibility of reposting those variables to get the same result again. You can handle this in your application if you write code for it: write out the state on unload and recreate the state on load.
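A sketch of that save-on-unload, restore-on-load pattern. The state shape is illustrative, and the `Map` here stands in for the browser’s local or isolated storage:

```typescript
// Serialize the application state on unload; restore it on load so a
// refresh puts the user back where they were.

interface AppState {
  currentPage: string;
  formData: Record<string, string>;
}

const storage = new Map<string, string>(); // stand-in for local storage

// Call from the application's unload handler.
function saveState(state: AppState): void {
  storage.set("appState", JSON.stringify(state));
}

// Call from the application's load handler; null means a fresh start.
function restoreState(): AppState | null {
  const raw = storage.get("appState");
  return raw ? (JSON.parse(raw) as AppState) : null;
}

saveState({ currentPage: "checkout", formData: { name: "Jones" } });
const restored = restoreState();
```

The same mechanism also covers crashes: whatever was last saved is what the user comes back to.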

For the back button, things are a little bit more complicated. One way you can handle this is to build a state machine that tracks the logical pages in your application, such as the pages in a wizard. Then you can trap the back button event and unwind the state machine. If you are at the beginning of the application, let the event go and act as normal.
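A sketch of that logical-page state machine (class and method names are illustrative):

```typescript
// Track logical pages (wizard steps) so the back button unwinds the
// application's own history instead of unloading the whole page.

class NavigationHistory {
  private pages: string[] = [];

  // Record each logical page as the user moves forward.
  visit(page: string): void {
    this.pages.push(page);
  }

  // Handle a trapped back event: returns the page to show, or null
  // when we're at the beginning and the browser's normal back
  // behavior should be allowed to proceed.
  back(): string | null {
    if (this.pages.length <= 1) return null;
    this.pages.pop();
    return this.pages[this.pages.length - 1];
  }
}

const nav = new NavigationHistory();
nav.visit("step1");
nav.visit("step2");
nav.visit("step3");
```

Each trapped back-button event calls `nav.back()`; a `null` result is the signal to stop trapping and let the browser navigate away.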



The first thing that a lot of people think of when they start thinking about these challenges with the back and refresh buttons is to simply disable them rather than going through all of the effort of handling them. This is a choice, but it breaks the way that users expect to browse on the web, so don’t do it.



Ignoring your bandwidth is another large mistake that people make. There are a couple of different ways that this happens.

To start off, you need to think about the size of your application and how that will affect load times. A lot of desktop applications are many megabytes in size. This is fine since you are not having to download the application over and over again to run it. If this is the case with your rich internet application, you need to think about partitioning your application to optimize load times. The simplest example here is to make sure that you don’t embed assets such as videos or images inside your application unless you absolutely need them on startup.

More advanced techniques include partitioning the application itself into multiple easily digestible parts.

The second thing to think about is video streaming, if you are using video.

One more area of concern is the amount of data that you are pulling back at one time. There are a lot of different paging techniques that you can employ with easily implemented patterns.
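The simplest of those paging patterns can be sketched like this (function names illustrative; in practice the slicing happens server side so only one page crosses the wire):

```typescript
// Page-at-a-time retrieval: return one slice of the result set
// instead of the whole thing.

function getPage<T>(items: T[], pageNumber: number, pageSize: number): T[] {
  const start = pageNumber * pageSize; // zero-based page index
  return items.slice(start, start + pageSize);
}

// 95 rows paged 20 at a time gives 5 pages, the last one partial.
const allRows = Array.from({ length: 95 }, (_, i) => i);
const pageCount = Math.ceil(allRows.length / 20);
```

The client then requests page N as the user scrolls or clicks, rather than binding all 95 (or 95,000) rows up front.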



There are good ways and bad ways to leverage animations. Many times there are gratuitous animations that have been thrown in just because they can be.

The good is when a particular animation helps the user visualize data in a unique way or leads the user to the next action. For example, you can, when all required fields are filled in, add a shimmer behind the “next” button to draw the user’s eye to guide them along the way.

Another example of good use of animation is showing transitions in state or data. As Mike Labriola put it, if your user rolls a ball and it just disappears as it leaves their hand and appears across the room, they would be very surprised. By showing the state transforming through animation, you can show your user what happened.



We, as developers, are infamous for NIH (Not Invented Here). Even a limited framework contains a tremendous number of utilities that you don’t have to reimplement. There are a lot of possible issues with not leveraging the framework that you’re running on. First, you have to maintain it. But the other issue, more unique to the RIA world, is that the user has to download this code when they run your application. This bloats the application and contributes to the other worst practice that we already talked about: ignoring your bandwidth.



Cowboy development is always a worst practice. The problem is that there are times that people get away with it. And that makes them bolder and bolder. “It’s just two lines of code. A tweak really. I’ll just make that on the production server.” Tweaks have brought down more servers than major production roll-outs. The major changes have been through testing and QA and all sorts of engineering rigor. The tweaks have, at best, been reviewed by the guy sitting in the next cube.

With RIAs, we are building real production applications and we need to apply the same disciplines that we should for any other application development. That includes Source Control, Change Control, Bug Tracking, solid development processes, TDD, Continuous Integration and the whole kit and caboodle.



In laying out the application’s interface, it’s really easy to get carried away with the number of containers to control the exact positioning of the items on the screen. In the HTML world, we did this with tables until we were all told that tables were evil. The answer was to switch to divs and put divs inside of divs and so on. This proved not to be any better. The real answer was to use CSS to set the relative positioning of the items.

The same idea applies in the RIA technologies. The more containers that you use to create your layout, the more constricted it will be. 



If you have a really complex rendering of a given item, that’s not necessarily a bad thing. However, if you take that same item and databind a thousand of them into a list – now you have a problem.
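The usual fix is UI virtualization: render only the items actually visible in the viewport. The core of it is just a window calculation; this sketch uses illustrative names, and the RIA frameworks of the day shipped virtualizing list controls that did this for you:

```typescript
// Given the scroll position, viewport size, and fixed row height,
// compute which of the bound items actually need to be rendered.

function visibleRange(
  scrollTop: number,
  viewportHeight: number,
  rowHeight: number,
  itemCount: number
): { first: number; last: number } {
  const first = Math.max(0, Math.floor(scrollTop / rowHeight));
  const visibleRows = Math.ceil(viewportHeight / rowHeight);
  const last = Math.min(itemCount - 1, first + visibleRows);
  return { first, last };
}

// 1000 bound items, 25px rows, a 400px viewport scrolled to 500px:
// only rows 20 through 36 need the expensive rendering.
const range = visibleRange(500, 400, 25, 1000);
```

Seventeen complex item renderings instead of a thousand is the difference between a smooth list and a frozen one.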



Getting religious about your technology decisions is a really common and really horrible practice. You should evaluate the possible technologies on their technical merits rather than on feeling, personal biases or any other non-technical method.

Instead you have to determine whether the technology will actually do what you need it to do, what your team makeup is and whether they will be able to leverage the technologies, whether the IT department will be able to support the roll out of the application, and all of the other technical merits of the chosen platform, technology and so on.

This is the point in the talk where James Ward (remember that he’s from Adobe and I’m from Microsoft) came over and hugged me on stage!


I’d also like to, just as I asked for your best practices, hear from you about worst practices that you’ve found over time.



Rich Internet Applications are meant to enhance the user’s experience. Poorly designed applications don’t accomplish this goal. Furthermore, we’re probably going to face a period where we have a lot of “Silverlight Blink”, named for the HTML Blink tag that annoyed us all for so long. We all need to champion user centered design to ensure that we are building applications that will help, not hurt, the user.

The architecture of the client matters, especially now, as much as the architecture of the rest of the application. It helps with testability, maintenance, flexibility, changes and a ton more.

These are real applications and should be built following the best development practices. This includes all of the engineering rigor that any enterprise quality application is built with. This includes change control, feature and bug tracking, TDD, continuous integration and the whole ball of wax.

Don’t rewrite the framework that you should be leveraging.

Leverage user centered design techniques. There are a lot of great resources out there that you can tap into to learn more. I’ll follow up with a post about that in the near future.

Take religion and emotion out of the technical decisions that you are making. Evaluate technologies on their technical merit and choose the one that’s going to work best for your team.


Be sure to subscribe to James’s blog and mine. Follow up with us with questions. Let us know how you’re leveraging the RIA technologies. We’d love to hear it.

I enjoyed giving this talk and thank Mike and James for joining me in presenting it.

ArcReady – the Soft Skills

As an Architect, you should be more than just a technical guy. Your job is to be the liaison between the technical side of the world and the business side of the world. You need to be able to effectively communicate with all sides and understand the motivations of the different parts of the business.

To that end, this quarter’s ArcReady is here to help you. This quarter, your local Architect Evangelist will discuss the soft skills needed to perform the job of an architect and how to gain those skills.

Here’s the official text from the invite:


Microsoft ArcReady

Professional Patterns on the Job

You’re smart. You deliver. What more could your company want from you?  Why don’t they come to you for the big technical decisions? Why won’t they listen to your proposals? It seems like everyone has an agenda and they’re doing everything they can to kill your great ideas.

Join us this quarter as we focus on the soft skills that architects need to master. Learning these skills will boost your emotional intelligence and help you become a more professional, well rounded contributor. You’ll gain insight into the architect’s role as leader, influencer, and business professional and learn how to leverage your position to become a positive force within your organization.

Session 1: Mastering the Soft Skills
In this session, we’ll discuss key interpersonal skills and how they can affect your projects and career. We cover how to positively connect with humans, how to participate in and influence the business processes you support, and how to transcend your technical role and maximize your connections with all members of your organization.

Session 2: Organizational Dynamics
This session examines the dynamic nature of large organizations – their structures, decision making processes, and political landscapes. We’ll discuss the goals of key business and technical decision makers and their influence on architects and software projects. We’ll conclude with some strategies for maximizing the soft skills from Session 1 to ensure successful outcomes for your projects and career.


  • A forum for aspiring and practicing architects to discuss industry trends
  • An overview of Microsoft’s roadmap as it relates to software architecture
  • A mechanism to solicit your feedback
  • An opportunity to showcase the work you do!


Architects and Senior Developers who are interested in becoming an architect.


Events are held in 19 cities across Central Region.  To register for this event, please visit


  • Omaha, NE November 4, 2008 9:00am – 11:45 am
  • West Des Moines, IA November 6, 2008 9:00am – 11:45 am
  • Bloomington, IL November 11, 2008 9:00am – 11:45 am
  • St. Louis, MO November 12, 2008 9:00am – 11:45 am
  • Waukesha, WI November 13, 2008 9:00am – 11:45 am
  • Overland Park, KS November 13, 2008 9:00am – 11:45 am
  • Knoxville, TN November 17, 2008 9:00am – 11:45 am
  • Franklin, TN November 18, 2008 9:00am – 11:45 am
  • Downers Grove, IL November 19, 2008 9:00am – 11:45 am
  • Dallas, TX November 20, 2008 1:00pm – 3:45 pm
  • Indianapolis, IN November 20, 2008 9:00am – 11:45 am
  • Minneapolis, MN November 20, 2008 9:00am – 11:45 am
  • Southfield, MI November 25, 2008 9:00am – 11:45 am
  • Mason, OH December 2, 2008 9:00am – 11:45 am
  • Houston, TX December 2, 2008 9:00am – 11:45 am
  • Independence, OH December 3, 2008 9:00am – 11:45 am
  • Columbus, OH December 4, 2008 9:00am – 11:45 am
  • Austin, TX December 4, 2008 9:00am – 11:45 am
  • Chicago, IL December 9, 2008 9:00am – 11:45 am



    This is definitely going to be a great session with content that you’re not going to get anywhere else. Register and report back what you learned!

    Domain Specific Languages (DSL)

    image I’ve been spending a lot of time recently looking at DSLs. That’s not on purpose, it’s just happened that way as I’ve been to a number of different conferences, such as Central Ohio Day of .NET where Jay Wren was talking about Boo and DSLs. I’ve also been in on a lot of discussions with Joe O’Brien and others about them.

    From Martin Fowler – “The basic idea of a domain specific language (DSL) is a computer language that’s targeted to a particular kind of problem, rather than a general purpose language that’s aimed at any kind of software problem.”

    There are a number of DSLs that we use every day. One of them that Joe likes to reference is:

    no whip
    2 pump
    white mocha

    Obviously (hopefully), this is Starbucks’ DSL that they use. This is a very efficient way for the Starbucks employees to communicate. The cashier starts by taking the order and transmitting that order to the person working the espresso machine who fulfills the order and passes it on to the customer. I usually understand it when they hand it back to me even though it sounds little to nothing like what I said to the cashier in the first place.

    Domain – Every domain has its own vocabulary and dialect. Think about the medical field, banking, real estate, investments, mathematics, zoology, chemistry, grocery stores and on and on. Every one of these has a way of communicating that the outside world has to understand in order to understand them.

    Specific – these vocabularies and dialects are specific to the domain that they are in. In fact, the various terms don’t transfer from domain to domain. As an example, if I say “Prime” to an investor versus “Prime” to a butcher, they are going to have completely different ideas as to what I’m talking about.

    Language – this specific vocabulary in each of these domains is about communicating quickly and efficiently. It’s a language all unto itself. Now, the majority of these are created from within the languages that we speak on an everyday basis – such as English or French. Some of them have a touch of Latin thrown in, but for the most part they are locale centric.

    In very much the same way, DSLs in the software world are typically created from a language that already exists. There are languages, such as Boo, where the point of the language is to make it easy to create DSLs. There are other languages, such as Ruby, that make it very easy to create DSLs (see Joe’s talk referenced at the end of this post). This is one of many reasons that I’m geeked about IronRuby.
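To illustrate the internal-DSL idea – here sketched in TypeScript purely for illustration, though the post’s examples are Ruby and Boo, where the syntax gets much closer to plain English – the Starbucks order above can be expressed as a small fluent API:

```typescript
// An internal DSL hosted in a general purpose language: the domain
// vocabulary (no whip, pumps, white mocha) becomes the API itself.

class DrinkOrder {
  private parts: string[] = [];

  noWhip(): this { this.parts.push("no whip"); return this; }
  pumps(n: number): this { this.parts.push(`${n} pump`); return this; }
  whiteMocha(): this { this.parts.push("white mocha"); return this; }

  toString(): string { return this.parts.join(", "); }
}

// Reads almost like the barista's call-out:
const order = new DrinkOrder().noWhip().pumps(2).whiteMocha().toString();
```

The point isn’t the coffee; it’s that a business analyst can read `noWhip().pumps(2).whiteMocha()` and verify the logic without reading “computer science language.”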

    image It’s always a good thing when the programmers and the users are speaking the in the same languages. This makes sure that you’re in lock step on the requirements and what the application is supposed to do. I’ve seen time and time again where an application does exactly what the programmer intended for it to do but communications issues mean that they had no idea what the user actually wanted or needed.

    The first time I was introduced to the topic was when I was writing banking software. We went through a lot of hoops to make sure that we were speaking in banking terms when talking to the business analysts (BAs). This was a struggle for a lot of the compsci majors just out of college who were amazing programmers but couldn’t understand the business rules. Part of the problem was that the language that we were writing in, while we had class and method names that mapped, was still the computer science language versus something that we could show the BA. I always wanted to make it a requirement that the programmers had to work as a teller for a week every couple of years so that they could understand the business. You think I’m joking, but Anheuser-Busch makes all of its employees, from brewers to architects to executives, go to brewing school. Domino’s Pizza makes everyone go through the line training to learn how the pizzas are made. There are a lot of these examples, but not nearly enough all at the same time.

    We are getting the tools, however, at this point where the language that we write in can start to become the interface with the BAs and the users, because we can write it in such a way that they can understand it.

    To get a fantastic primer on DSLs and see them created in Ruby, go watch Joe O’Brien‘s talk that he did at Mountain West Ruby Conf called Domain Specific Languages: Molding Ruby.

    Other things to check out:

    Martin Fowler on domain-specific languages
    Creating Domain-Specific Languages

    MIX Day 1 Keynote Ray Ozzie

    Ray Ozzie kicked off the MIX keynote by talking about the fantastic new things that have happened at Microsoft in the past year that are really re-engineering the DNA at Microsoft, from the acquisition of aQuantive to the fantastic internal work on Silverlight 2.0 and IE8. As an employee in the trenches, it’s often hard to keep focus on that big picture and remember that the company is aggressively self critical and self correcting. Another great step we are trying to take, that Ray touched on a little bit, is acquiring Yahoo!. It’s interesting, but even in the field, I’ve seen that just the fact that we’ve made an offer has had a profound effect on a lot of people and is driving us into new and interesting directions. After that, he talked about the big picture and the directions that Microsoft is going with Services and Advertising and how that fits into the big picture of our S+S message. By Services, he’s talking about software services in the Cloud (internet/network) rather than consulting services.

    The next huge point that he talked about is the idea of software above the level of a single device. Our users are starting to leverage intelligent devices of all types from phones to desktops to cars in every part of their lives. We need to look at how to really leverage the strengths of each of these devices and platforms.

    There are five buckets in which we can think about these services in the cloud.

    1. Connected Devices
      • The vision here is that we will have applications and services that span all of our devices.
    2. Connected Entertainment
      • The vision here is that we would only have to license our software and media once and be able to use it across all of our devices, from our music player to our desktop or car. This is a great vision.
    3. Connected Productivity
      • The vision here is that we will have a seamless experience from the desktop to the mobile device to the web with Office Desktop, Office Mobile and Office Live (web based).
    4. Connected Business
      • The start of the vision is a set of services from online CRM, financial services, hosted Exchange, communication services and even hosted SQL Server with an elastic type cloud supporting it. The long game is enabling utility computing in the enterprise, where people will virtualize more and more of their infrastructure onsite and in the cloud.
    5. Connected Development
      • We have a ton of different scenarios that we can code to with the same skill set of .NET and XAML across many different platforms. That’s exciting.

    Personally, I’m really excited to part of the company with an end to end vision that is as complete as the one that Ray was able to lay out today.


    Microsoft ArcReady – Software + Services

    This quarter’s ArcReady is coming quickly. This quarter we are talking about Software + Services (S+S). This is Ray Ozzie’s vision of the future of the industry. It’s a vision that encapsulates SOA, SaaS and Web 2.0 and really takes them to the next level. SOA can be how you compose, govern and control your services, but it doesn’t talk enough about delivery of the software to the user. SaaS is a great way to deliver software if your users are willing to rent the software. It A: doesn’t work for every user base and B: doesn’t address multi-headed clients where you might want a desktop client, web client and a mobile client. Web 2.0 is in the same boat. Web 2.0 can define the user’s experience with RIA, collaboration, collective knowledge and more. These tenets of Web 2.0 that we discussed last quarter (see the video of the session posted on the ReMix07 Boston site) are engaging on a number of levels, but they don’t really address some of the enterprise concerns of security, accountability and more.

    Software + Services really builds on top of all three of these ideas. Come learn more in a city near you.

    For the full abstract – see

  • *Columbus – 11/27/2007
  • *Cleveland – 11/28/2007
  • *Detroit – 11/29/2007
  • **Grand Rapids – 11/30/2007
  • *Nashville – 12/3/2007
  • *Cincinnati – 12/5/2007
  • *Indianapolis – 12/6/2007
  • **Louisville – 12/6/2007
  • Minneapolis – 12/11/2007
  • Milwaukee – 12/12/2007
  • Kansas City – 12/13/2007
  • Chicago – 12/14/2007
  • St Louis – 12/14/2007
  • Dallas – 12/17/2007
  • Houston – 12/18/2007
  • Austin – 12/19/2007
  • * means I’m speaking…
    ** means that we’re actually doing a last quarter’s Web 2.0 session followed by this quarter’s Software + Services session. They go well together and I missed Louisville and Grand Rapids last quarter.

    That’s going to be a tough 2 weeks on the road there to be honest. 12/7 – come to my funeral as I die from Red Bull overdose. 🙂

    Microsoft ArcReady – Downloads


    More Platforms versus Applications

    I blogged last about Platforms versus Applications and put the statement out there that platforms beat applications every time. At least one of my readers (Alan Stevens) agrees with me. My other reader hasn’t commented yet.

    Alan pointed out, however, that I missed some rather important platforms that Microsoft ships. One that all of the developers in the audience (on the Microsoft technology stack, so that doesn’t include you, Joe) probably use on a daily basis is Visual Studio. Alan posted about it in his post on VSX. He points out that Visual Studio itself is just a shell and that all of the other bits that you see are simply add-ins. That shell is now available for you to leverage as you see fit in your applications. Obviously, you can write add-ins such as Dotfuscator from Preemptive Solutions or CodeRush. What you probably didn’t know is that you can build a standalone application on it that you ship independently of anything else. The cool part about that is that you have a built in extensibility model, and other applications can meld with yours because you’re on top of a great platform. 🙂 Don Demsak, aka DonXML, had a podcast about Visual Studio Extensibility back in April.

    Mappoint and Virtual Earth, despite my recent jolly adventure with Mappoint, are fantastic platforms for building applications on top of. One of the local companies here in Michigan is using them as the base for one of their applications called eoStar. I find it fun that they have built their application as an extensible platform as well – see their plug-ins section for things that third parties have built for their application.

    Microsoft Dynamics CRM is a horizontal base platform for you to build vertical applications on top of, such as Omnivue’s Health Care application. There are multiple ways to integrate here, from API calls to interfaces you can implement to web services that you can leverage.

    I know that I’m missing some of the important applications out there that Microsoft ships as a platform.

    So, what does this mean for your applications? There are two directions that you should be looking.

    First, when you are starting a new application – is there something out there that you can leverage as the base for your application that will handle a lot of the underlying plumbing? I like Brian Prince‘s quote – “Don’t be a plumber.” What he’s talking about is leveraging platforms and frameworks that will do a lot of the heavy lifting for you so that you can concentrate on your business logic, which is your real value add.

    Second, you need to be thinking about the possible extensibility points where someone else could tap into your application. I know, you’re thinking – but Josh, I’m building the corporate equivalent of Notepad here – there are no extensibility points. While that may be true, think about your favorite text editor and what add-ins you’re using. If you are still using Notepad, you are in the dark ages and need to look at UltraEdit, Scite, E or any of the thousands of others that are out there. One of the things that all of these have in common is that they all support extensibility. Scite, for example, has a great page dedicated to different plug-ins called Scite Extras. There are extras there from various language formatting libraries to scripts that you can use. What this proves is that even simple tasks like text editing can benefit greatly from being able to leverage a great platform, so you should be thinking about that with your applications.

    Alan Stevens on VSX