Hello everybody, and welcome back to the second post in our blog series about SharePoint Enterprise Application Development. Our topic this time around is Build Engineering.
Build Engineering is the process of controlling what goes into a Build. A Build is composed of all the assets – in SharePoint usually solution packages and PowerShell scripts – that are needed to deploy or upgrade a custom application on a given farm.
I was recently on a project for a Fortune 50 company that had more than a dozen developers building a highly customized SharePoint intranet serving more than 100,000 users. Our first challenge was enabling that many developers to work together effectively in the same code base. The solution was twofold: a common source code repository, in our case Team Foundation Server (TFS), and a dedicated development environment for each developer. Each developer needs their own development environment so that they can run local deployments at any time without interfering with the development efforts of any other team member. (More on SharePoint development and project collaboration is available in our earlier posts.)
Content vs. Code
Although code moves “forward” from Development environments to Integration to Test to Production, you may also need to move content “backward” from Production to the other environments. Bringing content into your Integration, Test, and Development environments can be critical for meaningful testing. Note that the forward path for code is linear – code goes “through” Integration, then “through” Test, and then into Production. Content, however, can move (often via Backup and Restore) directly from Production to each of the other environments, in a parallel rather than sequential manner. (See the graphic below, which shows a SharePoint implementation flow proposed for a previous client.)
For this project, each developer had a SharePoint Server virtual machine – some hosted on the client’s infrastructure, some hosted at CloudShare. I also used SharePoint 2010 running on my Windows 7 desktop as a development environment (again, see our previous blog post on this subject). The key thing is for each developer to have their own dedicated development environment, and to use TFS to check in working code or shelve not-quite-ready code as needed. This keeps the entire team in sync on the codebase, and also lets team members easily share code that is not ready for check-in. Beyond that, we had three common environments: Integration, Test, and Production. This is where Build Engineering comes in: it is the process of selecting assets from TFS, then planning and developing scripts (which also get checked into TFS) that let you deploy those assets into each of the three farms in turn, not simultaneously.
The idea around Build Engineering is that we create a deployment process that is executed on the Integration farm; validate it there; and, if the result passes our initial tests, repeat the process on the Test farm. Once it passes testing on Test, we use the same process to deploy to Production. This way, there are no surprises when we go to Production. If something fails along the way, we make changes and repeat the process from Integration onward. Nothing goes to Production that hasn’t been thoroughly tested in the Integration and Test farms.
By using TFS features like change sets and labels, and by associating check-ins with work items, deployment engineers can get – and should make use of – granular details about what is in a deployment. Every other week, we did a deployment into Integration. The project manager and I agreed on which change sets should be deployed in a given build cycle, and we used TFS labels to record which change sets were in a given build.
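From a TFS command line, this labeling workflow can be sketched with `tf.exe`. This is a hypothetical example – the label names, server path, and changeset numbers are placeholders, not values from the project:

```powershell
# Label the source tree at the agreed-upon changeset for this build cycle.
tf label "Build-Sprint12" $/Intranet /version:C4321 /recursive

# Retrieve exactly the code that was in a labeled build.
tf get $/Intranet /version:LBuild-Sprint12 /recursive

# Report what changed between the previous build and this one.
tf history $/Intranet /version:LBuild-Sprint11~LBuild-Sprint12 /recursive
```

The history between two labels is what lets you explain, file by file, what a deployment contains.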
As a result, we could:
- Explain each file deployed: what functionality was deployed, why it was deployed, and how to test it.
- Produce a report of the corresponding work items, which the testing team used as guidance on how to test the resulting application.
- Provide the additional deployment steps that had to be executed, which we then automated with extensive PowerShell scripts. This usually meant deploying the WSPs and manipulating the object model to configure the site as needed.
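A minimal sketch of such a deployment script might look like the following. The package names, file paths, web application URL, and feature name are all placeholders, not artifacts from the actual project:

```powershell
# Load the SharePoint cmdlets if the script is run from a plain PowerShell console.
Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue

$wspPath = "D:\Builds\Sprint12\Contoso.Intranet.wsp"   # assumption: build drop location
$webApp  = "http://intranet.contoso.local"             # assumption: target web application

# Add the package to the farm's solution store, then deploy it to the web application.
Add-SPSolution -LiteralPath $wspPath
Install-SPSolution -Identity "Contoso.Intranet.wsp" -WebApplication $webApp -GACDeployment

# Post-deployment configuration via the object model, e.g. activating a feature.
Enable-SPFeature -Identity "Contoso.Intranet.Branding" -Url $webApp
```

Keeping these steps in a script checked into TFS means the same deployment runs identically on Integration, Test, and Production.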
Our goal was to be able to hand off a package of files from the development team to a farm administrator who could deploy it with minimal pain and effort.
Additional levels of automated building and testing are possible depending on the level of sophistication of the development team, IT and project environments, and the cost versus benefit perceived.
Each deployment can benefit from accompanying documentation. The key elements of this documentation should answer the following questions:
- What needs to be deployed? What files are involved and how does the farm admin get them?
- How does it get deployed? Note that the tradeoff between the ease of deployment for the farm administrator and the time needed to automate the deployment steps will be visible here. These instructions should ideally be a small number of steps, which often includes running a script that deploys the solution package(s). The more the script does or, conversely, the fewer steps that require a human to configure the environment, the more reproducible and less fragile the result.
- Why does it need to be deployed? This enables you to tie the deployment effort back to project requirements and/or change requests.
- How do I know the deployment was successful? Note that this is validating the deployment and not necessarily the functionality deployed. Many organizations will have dedicated software testers that will test the overall application. However, the farm administrator needs some way of telling if the deployment went as expected or not.
- If the deployment goes awry, how does the farm administrator roll back to the previous version? (And no, avoiding this question is not a good idea.)
- Who is the contact person for questions about the deployment? This is usually the development team lead. The team lead’s contact information is provided so the farm administrator can contact that person if there is a question about the deployment.
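For the “how do I know it succeeded?” question, a small validation script can give the farm administrator a quick yes/no answer. This is a sketch – the solution names are placeholders:

```powershell
# Load the SharePoint cmdlets if needed.
Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue

# The WSPs this build is expected to deliver (placeholder names).
$expected = @("Contoso.Intranet.wsp", "Contoso.Branding.wsp")

foreach ($name in $expected) {
    $solution = Get-SPSolution -Identity $name -ErrorAction SilentlyContinue
    if ($solution -and $solution.Deployed) {
        Write-Host "$name is deployed."
    } else {
        Write-Warning "$name is missing or not deployed."
    }
}
```

Note that this only validates the deployment itself; functional testing of the application remains the testing team’s job.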
One significant process change during the development and deployment lifecycle is the switch from removing and reinstalling Solution Packages (WSPs) to performing in-place upgrades instead. One challenge of upgrade deployments is ensuring that developers can perform both clean-install deployments and upgrade deployments in their development environments and get the same resulting application. This is complicated by the fact that an upgrade does not process the solution package in the same way that a fresh deployment does. Site column definitions are one example: changes to them cannot be handled by an upgrade without a feature receiver or a post-deployment script. Because upgrade deployments impose significant development, deployment, and unit-testing overhead on the development team, it is a good idea to delay the switch from clean installs to upgrades until as late in the project as possible.
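An upgrade deployment can be sketched as below. The path and package name are placeholders; the key point is that `Update-SPSolution` replaces the files in an already-deployed package rather than re-provisioning everything the way a clean install does:

```powershell
# Load the SharePoint cmdlets if needed.
Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue

$wspPath = "D:\Builds\Sprint13\Contoso.Intranet.wsp"   # placeholder build drop location

# In-place upgrade of an existing solution. Changed artifacts such as site column
# definitions are NOT reapplied by this step, so a feature receiver or a
# post-deployment script must handle those changes separately.
Update-SPSolution -Identity "Contoso.Intranet.wsp" -LiteralPath $wspPath -GACDeployment
```

Running both this script and the clean-install script against identical development environments, and comparing the results, is exactly the extra unit-testing burden described above.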
In conclusion, when engaging in a large-scale custom SharePoint development project, take the time to plan the build and deployment process early in the project. Poor developer communication and a failure to plan for deployment until late in the game will lead to project misery and rework.
Thanks to David Lozzi, Ralph Rivas, and Dan Sniderman for editorial review and contributions.