Even demos need good quality code. We found that being careful about automation and internal quality helped us build a successful and robust demo application in a week. Focusing on high-quality code and rigorous process right from the start allowed us to develop more quickly than if we had dropped them to work “quick and dirty”.
A common claim is that “quick and dirty” is effective in the short term because it’s easy to change, as in this graph:
Unfortunately, the graph lacks a scale to tell us when an investment in quality will pay off. It suggests that a team can make a lot of progress before a lack of quality starts to slow them down. Our experience is that the curve starts to rise very quickly, possibly after just a couple of hours, and that we have to pay close attention to quality if we want to keep going. This gives a much more dramatic rise in the cost of change:
In the rest of this post, we describe how we delivered good software quickly by following practices to maintain internal quality. None of these practices are new or specific to our team—everyone can benefit from applying them.
A client was interested in understanding the possibilities of using Natural Language Conversation technology with their customers, so they asked us to build a demonstrator application. Our sales team would present an Amazon Alexa version in one week’s time, and a Google Assistant version the following week. We decided that the best approach would be to work just on the Alexa version first, and then port to Google Assistant. We had three developers on the team, one part-time.
The structure of the application was reasonably simple:
1) The user speaks to the voice activated device.
2) The device calls an HTTPS web-hook, provided by us, which returns a response in the form of text for the device to speak. The request formats for these calls are defined by Amazon and Google.
3) We provide a dummy server that responds to this web-hook as if it were the client’s real back-end service. The web-hook calls this server to get and set application-specific data.
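The three steps above can be sketched in a few lines. This is a hedged sketch with made-up field names, not the real Amazon or Google schemas; the back-end call is injected as a plain function so the web-hook logic stays independent of the dummy server:

```python
# Illustrative sketch of the request flow; "intent", "answer" and "speech"
# are invented field names standing in for the proprietary formats.
import json

def handle_webhook(request_body: bytes, fetch_data) -> bytes:
    """Step 2: turn a device request into text for the device to speak."""
    intent = json.loads(request_body)["intent"]   # what the user asked for
    data = fetch_data(intent)                     # step 3: query the dummy server
    return json.dumps({"speech": data["answer"]}).encode("utf-8")
```

In the real application `fetch_data` would make an HTTPS call to the dummy server; in tests it can be a simple stub.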
Our main challenges were the short time frame and that the application had to be robust enough not to break during the demonstration to the client.
This meant that we had to be rigorous about prioritising the most important features and about keeping the cost of change low. In practice, we adopted a lightweight version of eXtreme Programming (XP), adjusted to the needs of the situation—which is the approach that Beck and Andres recommend in “XP Explained”.
The following sections detail the practices which helped us succeed.
Choose the right tools
As with any software project, choosing the right tools for the job will help you go faster. We decided to use Python3 throughout the stack, and it proved ideal for our purpose. The language is very expressive and is perfectly suited to producing quick prototypes. It provides lightweight standard libraries for running an HTTPS server, which was great for our cloud server. You can run a “Hello world!” HTTPS server in five lines of code.
Amazon provides the ask-sdk library for Alexa in Python3, which helped us deal with the proprietary message format used between the device and the web-hook. Google provides a Node.js library for their own message format; however, because we had a number of reusable assets by the time we started on this web-hook, we opted to keep using Python3 here as well. Implementing our own parsing and construction of the proprietary messages was straightforward.
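Handling the format ourselves amounted to little more than unpacking and packing JSON. A hedged sketch, with invented field names rather than the actual schema:

```python
# Illustrative only: the field names here are stand-ins for the
# proprietary request/response schema, to show the shape of the work.
import json

def parse_utterance(request_body: bytes) -> str:
    """Pull the user's words out of an incoming web-hook request."""
    return json.loads(request_body)["query"]["text"]

def build_reply(text: str) -> bytes:
    """Wrap reply text in the JSON envelope the device expects."""
    return json.dumps({"reply": {"text": text}}).encode("utf-8")
```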
Start with a walking skeleton
Alastair Cockburn describes a walking skeleton as: “a tiny implementation of the system that performs a small end-to-end function. It need not use the final architecture, but it should link together the main architectural components. The architecture and the functionality can then evolve in parallel.” 
In our case this was a simple “Hello world” conversation flow, where the user asked something of the device, which then would call our web-hook, which in turn would call the server. The server’s reply was then propagated back to the device. By first completing this interaction we confirmed that our proposed architecture was going to work – it validated our assumptions before we spent any effort that could potentially be wasted.
Iterative and incremental development
We started with a bare-bones but functional application, a.k.a. the walking skeleton, which we then transformed into a polished demo through a series of iterations. This approach minimised risk: from the moment the walking skeleton was finished, we always had something to show.
At the same time, we developed incrementally in that we would always finish one full feature for the demo before starting on the next one. This helped us minimise the time wasted on writing and maintaining code for incomplete features, which could not have been shown during the demo by the sales team and hence delivered no value. In the end, when time ran out, this left us with several high quality interactions rather than scores of incomplete ones.
Setting up continuous deployment as early as possible was key to maximising our speed. Manually deploying components of the system, whether over SSH or through web GUIs, is extremely expensive when done repeatedly. So, after developing the walking skeleton, we spent the first couple of days, almost half our total time, automating the deployment of all components.
Many hold, by intuition, that such a heavy investment in automation will not pay off over just one week. This position implicitly assumes that you will not be redeploying your components enough times during development to justify an investment in automation. We believe that this is an overly optimistic assumption. In our case, we deployed the lambda function 90 times, the server 72 times and the Alexa skill at least 20 times (that is an average of around 36 deployments a day). Of course, these numbers would be lower if we hadn’t automated, but then that would have meant larger and more difficult changes per deployment, implying a higher cost of change.
Automating processes also helped us understand them better, by forcing us to look into how they worked and, more importantly, how they should work. We had an issue with the Python server on EC2 being killed when started automatically, which initially meant we had to start it by hand over SSH. While working out how to automate our server deployments, we discovered that we needed to launch the process with nohup so that it was not attached to our shell session on the EC2 box. Imagine needing someone to constantly supervise the server during the demo just to keep the process up, all because it once seemed acceptable to spend a few extra seconds SSH-ing into the box to start the server by hand for every deployment.
Having most manual tasks automated allowed us to focus on the important stuff: coding. It freed us from distractions, such as forgetting to manually re-deploy the server and then spending five minutes figuring out why changes aren’t working, when really they’re not being applied. Time is lost not only to chasing down such problems; there is also a hidden cost from context switching. It is difficult to estimate the cost of being pulled out of development by repetitive manual processes, but research suggests it can take 10-15 minutes on average to re-focus on the task at hand.
Another benefit of a high degree of automation is that, combined with some documentation, any other developer can easily pick up the demo, should key members of the original team win the lottery. You also reduce the extent to which you are tied to a particular machine or set of user accounts.
– Iqbal, S.T. and Horvitz, E., “Disruption and Recovery of Computing Tasks: Field Study, Analysis, and Directions” (2007)
– http://wiki.c2.com/?TruckNumber
Pair programming
Two programmers working in parallel can write twice as many lines of code as one, but lines of code are not what software engineers should aim to produce. The desired output of software development is working software, and usually the fewer lines the better. We believe that two programmers working together at the same machine can produce working software faster than two working in parallel on different machines, because the critical bottleneck in programming is brain power, not typing speed.
On our project, pairing ensured that the code contained the best ideas of the team, rather than an individual working alone without the perspective of a colleague. This is because pairing facilitates constant code review and healthy criticism from another member of the team.
From early on we tried to avoid having a single point of failure. By pairing, all of the team’s knowledge about the system was distributed quickly between developers, meaning that the progress of development was never dependent on any niche knowledge about the system held by only one team member.
Sometimes problems can be especially hard, even for two people. On occasion, all three of us would program together when navigating a particularly tricky stage of the implementation. This is mobbing, a practice where three or more developers work on the same machine. Mobbing has similar benefits to pairing, but with even more thought-power applied to the problem. We used this form of development in two cases. One, when it was critical that the whole team understood a certain aspect of the system, for example the walking skeleton. Two, when working on a particularly difficult and important bit of code such as designing and refactoring the central state machine controlling the flow of conversation.
Ports and adaptors/Hexagonal architecture
Ports and adaptors is a design technique for distributing and isolating the responsibilities of a system into small, loosely coupled components. The ports of a component are the interfaces over which it exposes its functionality. An adaptor is an object which translates a port so that it is more appropriate for the connected component. In this way, different components can communicate while remaining decoupled. Ports and adaptors is also often referred to as Hexagonal Architecture. Below is a popular illustration of the concept, showing business logic in the centre, interfacing with external services through adaptor objects communicating over ports.
For example, a system that fires off email notifications, reads a database and sends reports has three clear ports. The business logic of the application accesses these services via adaptor objects, which call the API of the port through which the underlying services can be accessed. Our application had two ports, one to the speech device via the lambda, and the other to the backend server over HTTPS.
Ports and adaptors allowed us to isolate the known domains of the problem from the unknown and place these well-understood sub-problems into their own hexagons. This narrowed the scope of the core logic within each hexagon significantly, making the problem easier to reason about and hence much faster to solve. Consider the difference in effort between typing the line self.server.get_balance(account_name) and designing and writing out that function’s implementation inline.
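A hedged sketch of the idea in Python; the class and method names are illustrative, not our actual code:

```python
# Business logic depends only on an abstract port; an adaptor translates
# that port onto a concrete back-end. All names here are illustrative.
from abc import ABC, abstractmethod

class ServerPort(ABC):
    """Port: the interface the core logic talks to."""
    @abstractmethod
    def get_balance(self, account_name: str) -> int: ...

class InMemoryServerAdaptor(ServerPort):
    """Adaptor: stands in for the HTTPS calls to the dummy back-end."""
    def __init__(self, accounts: dict):
        self._accounts = accounts
    def get_balance(self, account_name: str) -> int:
        return self._accounts[account_name]

class AccountManager:
    """Core logic: knows nothing about HTTP, JSON or EC2."""
    def __init__(self, server: ServerPort):
        self.server = server
    def balance_message(self, account_name: str) -> str:
        return f"Your balance is {self.server.get_balance(account_name)} pounds."
```

Swapping the in-memory adaptor for one that speaks HTTPS to the real dummy server changes nothing in AccountManager.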
Another benefit of Hexagonal architecture comes from the decoupling of objects and services. The Alexa and the Google Home applications required different adaptors to the speech device, as the way their interfaces drive conversations are different. However, the server was the same for both applications and so it and its adapter could be re-used without having to modify a single line of their code. This would not have been possible had our components been tightly coupled.
Test-driven development
Using automated tests to drive development, even in small-scale projects, is highly beneficial for maximising speed and quality. In this section we describe how this practice, TDD, contributed to our success.
Once a feature has a passing test, we no longer need to hold knowledge about that feature in our heads. Nor do we need to worry much about designing for the future: problems can be dealt with when they occur, simply by writing more tests and making them pass, and any defect found later can be pinned down with another test. Tests kept our working set small because at any time we only had to think about the feature we were currently working on, trusting the test suite to protect us from regressions. Programming in the moment like this is both faster and much less effort.
Our test coverage was not exhaustive, but we did write the tests first, before the features they were intended to test. Writing a test before the objects it uses exist means that you are free to choose, at the point of writing the test, which objects should represent the entities in your program, how they are constructed and how their methods are invoked. Essentially, you are designing your program by writing tests, which is why developing in this way is said to be test-driven. Writing failing tests is a design activity that drives the development of the program.
The first advantage of writing tests first is that a ports and adaptors design emerges organically; no extra care needs to be taken to design a decoupled system on the fly. Writing unit tests up-front makes a decoupled design blindingly obvious. When we want to check whether the account_manager object calls the correct method on the ServerAdaptor class, we need to make an instance of it available, which forces us to think about dependencies. When people write code without writing the tests first, this kind of thing may not be as obvious: it may seem appropriate to use a singleton instance of ServerAdaptor, or to reach for some other easily avoidable anti-pattern.
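A hedged sketch of what such a test-first check might look like using unittest.mock; the names are illustrative rather than our actual test suite:

```python
# Written before the production code exists: the test pins down how the
# account manager should collaborate with its server adaptor.
import unittest
from unittest.mock import Mock

class AccountManager:
    # Just enough implementation to make the test below pass.
    def __init__(self, server):
        self.server = server
    def balance_message(self, account_name):
        return f"Your balance is {self.server.get_balance(account_name)} pounds."

class TestAccountManager(unittest.TestCase):
    def test_asks_the_server_adaptor_for_the_balance(self):
        server = Mock()                   # stands in for a ServerAdaptor instance
        server.get_balance.return_value = 42
        manager = AccountManager(server)  # dependency passed in, not a singleton
        message = manager.balance_message("current")
        server.get_balance.assert_called_once_with("current")
        self.assertIn("42", message)
```

Because the test constructs AccountManager with an explicit collaborator, the decoupled design falls out of the test rather than needing to be planned up front.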
Writing out a test to cover a new feature also forces you to consider the feature in the context of the current system. In the course of writing the test you get an idea of the effort that will be involved in making the test pass. Often, we would write out part of a test and then see some refactoring that was needed before we could finish writing it. After doing the refactoring, not only did the quality of the code improve, but the new feature became trivial to implement. We may never have realised this opportunity to refactor had we not written the tests first. How many of these refactorings would have been missed? How much would the consequent loss of speed have cost? For teams who don’t write tests first, these costs are being silently incurred all the time. For us, that could have meant losing too much time, delivering a less impressive demo and ultimately being out-competed.
 – https://martinfowler.com/bliki/TestDrivenDevelopment.html
 – For an in-depth explanation of TDD please refer to ‘Growing Object-Oriented Software, Guided by Tests’ by Steve Freeman and Nat Pryce
In this article we have described how well-known agile practices enabled us to go fast and create an impressive demo in a short time frame. We have shown that they apply not only to large projects, but also to small ones such as a demo application. We have covered a number of points, the most important of which are:
- Choosing tools that were fit for purpose and automating all of our deployments as early as possible.
- Iterating on a walking skeleton, one feature at a time, prioritising the most valuable feature first.
- Pairing and mobbing nearly all the time.
- Driving development towards a ports and adaptors architecture through automated tests.
To summarise the main benefits:
| Direct Benefit | Consequent Business Benefit |
| --- | --- |
| Faster development speed | Greater achievable scope |
| Lower risk | More predictable outcome |
| Higher quality of software produced | Robust demo |
| Reusable result | Can be shared in the organisation; can be used by us for a real project |
We are not claiming that our approach is the best possible. Our aim was to demonstrate that an investment in high quality pays off very early in a project. The more traditional approach of forgoing quality and automation in favour of quickly producing a lot of code is actually a false economy, and results in slower development even within the time frame of a week.
Our approach was not about achieving agile piety or cargo-cult conformance, but about delivering value. This successful business outcome is the result of high development speed and high external quality of the application, both of which stem directly from high internal quality and an efficient development process.