New Network Model deployment

D-Day is here: the go-live for our new Network Model. Okay, perhaps not as dramatic as the actual D-Day, but it is nonetheless a battle to ensure a successful system implementation whilst minimising business disruption.

But first, what is this new Network Model that I mentioned?

The new Network Model
In simple terms, it is an enhancement, or an upgrade, to the existing network model of Highways England’s motorway and all-purpose trunk roads. The new model provides enhanced functionality such as a truly connected network, better diversion planning and event analysis, and more accurate asset location referencing. Moreover, this deployment was particularly important because the new Network Model forms the base layer, the foundation upon which other asset systems will be deployed later this year and next.

So how did I ensure this deployment (and others like it) was delivered successfully?

Have a game plan
The first thing to note is that it is crucial to have a well-defined plan in advance. 

Here is an extract of the checklist I had for go-live:

  • What are the exact steps that will need to be performed for any needed data migrations or server changes?
  • How do we perform the deployment? (Make a checklist and give every team member a printed copy of it. Cross things off as you do them, noting who did them and what date and time they were done.)
  • How do we back up any dependent applications/systems and data pre-deployment?
  • If something goes catastrophically wrong, what is the procedure to restore the original version or restart the deployment?
  • How does one reach the IT or support/help departments of all involved parties?
  • When do we reach our go/no-go point (i.e., the point when you either commit to finishing the deployment or start to roll back the changes), and what should the criteria be?
  • What are the post-deployment triage priorities? (This one is especially important, because there will likely be a lot of "squeaky wheels" begging for grease, and one needs to know which ones to prioritise first.)
  • How do we take things offline and bring them back online if needed?
  • How does one contact customers/system users to let them know whether things are okay or not?

You'll notice a lot of these items are not about "how do we do this?" but "what do we do if something goes wrong?" There's a reason for that: Murphy’s Law. Things can and will go wrong, and the quality of a go-live depends just as much on how one handles the things that go wrong as on what goes right.
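
To make the tracking idea above concrete, here is a minimal, purely illustrative sketch (in Python) of how a go-live checklist could record who completed each step and when, mirroring the printed checklist we crossed off by hand. The step names and structure are assumptions for illustration, not our actual runbook.

    from dataclasses import dataclass, field
    from datetime import datetime
    from typing import List, Optional

    @dataclass
    class ChecklistItem:
        description: str
        completed_by: Optional[str] = None       # who did it
        completed_at: Optional[datetime] = None  # when it was done

        def complete(self, person: str) -> None:
            """Mark the item as done, noting who did it and when."""
            self.completed_by = person
            self.completed_at = datetime.now()

    @dataclass
    class GoLiveChecklist:
        items: List[ChecklistItem] = field(default_factory=list)

        def add(self, description: str) -> ChecklistItem:
            item = ChecklistItem(description)
            self.items.append(item)
            return item

        def report(self) -> None:
            """Print each item with its completion status."""
            for item in self.items:
                if item.completed_at is not None:
                    print(f"[x] {item.description} -- {item.completed_by} "
                          f"at {item.completed_at:%Y-%m-%d %H:%M}")
                else:
                    print(f"[ ] {item.description}")

    if __name__ == "__main__":
        checklist = GoLiveChecklist()
        backup = checklist.add("Back up dependent systems and data")  # example steps only
        checklist.add("Run the data migration scripts")
        checklist.add("Smoke-test the migrated network model")
        backup.complete("deployment lead")
        checklist.report()

We used a printed copy for exactly this, but the same who-and-when record is what matters, on paper or otherwise.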

Be prepared physically and emotionally
Before heading into the last stretch before a go-live, everyone needs a lot of sleep and rest, because when things go wrong, everyone has to work long hours to sort them out. A team of folks already at their breaking point will not work well together, will have a hard time executing the basics, and will fall flat when challenges arise. Team members with spouses, kids, etc. will have to take extra effort and get their families on board too. There is no way I could have accomplished what I did on any of my go-live adventures if my partner was calling me constantly begging me to come home or laying on a thick guilt trip about not seeing the kids over the bank holiday weekend (when we chose to deploy, because operationally it was the most feasible time for the business).

Be flexible with the programme/project team
The team will likely work long, hard hours during the go-live. It doesn't help anyone if you are hard on the team about things like getting to the office on time, shaving, dressing up, etc. unless those things are absolutely necessary (such as on customer or management meeting days). During the go-lives I have been through, I often worked late and through weekends. If I had forced myself to go into the office early in the morning, I would have been too tired to be effective for the entire day. Instead, I let myself get the extra sleep, and then only went into the office when it made sense. Also, if folks are stuck at the office late, do buy them dinner to keep their spirits up.

Hope for the best, prepare for the worst
Many large organisations, especially mission-critical deployers like NASA, build triple redundancy into projects because things will go wrong, no matter how much you plan. Unless you are working for a big enterprise, you probably can't do that. I had a Development environment (we called it the ‘Model Office’), a Testing/QA/Staging environment or two, and a Production environment. The best thing we did for our latest go-live was to import the data from Production into our Testing environment a few weeks in advance, which let us catch a lot of issues early and test our data migration with the most realistic data set we could.
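
As an illustration of that rehearsal step, below is a minimal sketch of how one might sanity-check a trial data migration by comparing record counts between the Production snapshot and the migrated Testing copy. It uses SQLite and hypothetical table and file names purely for illustration; the actual databases, tables, and checks we ran were different and more involved.

    # Sanity-check a trial data migration by comparing row counts per table.
    # Database files and table names below are hypothetical examples.
    import sqlite3

    TABLES = ["network_links", "network_nodes", "asset_locations"]

    def count_rows(conn: sqlite3.Connection, table: str) -> int:
        """Return the number of rows in the given table."""
        (count,) = conn.execute(f"SELECT COUNT(*) FROM {table}").fetchone()
        return count

    def compare_migration(source_db: str, target_db: str) -> list:
        """Return mismatch messages; an empty list means the counts agree."""
        mismatches = []
        src = sqlite3.connect(source_db)
        tgt = sqlite3.connect(target_db)
        try:
            for table in TABLES:
                src_count = count_rows(src, table)
                tgt_count = count_rows(tgt, table)
                if src_count != tgt_count:
                    mismatches.append(
                        f"{table}: source has {src_count} rows, target has {tgt_count}"
                    )
        finally:
            src.close()
            tgt.close()
        return mismatches

    if __name__ == "__main__":
        issues = compare_migration("production_snapshot.db", "testing_migrated.db")
        if issues:
            print("Trial migration check flagged differences:")
            for issue in issues:
                print(" -", issue)
        else:
            print("Row counts match for all checked tables.")

Row counts are only a first-pass check, of course; catching the subtler issues early was exactly why we rehearsed with the most realistic data set we could.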

Remember we are all on the same side
As the pressure mounts during the go-live, things often get tense, especially if your programme or project involves first- or second-tier software developers/suppliers. It is easy for folks to start snapping at and working against each other instead of with each other. Finger-pointing will not solve the issues. You absolutely must not allow the pressure, lack of sleep, or other issues to interfere with how you work with your teammates. When you are through it all, you'll be glad you kept your cool.

Celebrate success
And once everything is over and the dust has settled, don’t forget to celebrate with your team, not only to recharge after all the expended energy but also to boost morale in preparation for the next wave of deployments.