
What's new at MuleSoft?

If you read my last blog post on May's MuleSoft Summit, you'll remember that I got a sneak peek at MuleSoft's roadmap.

Well, that functionality has now been released.

On July 29th, MuleSoft's Crowd release went live. It seemed like a long wait, but it was finally time to get my hands on the new Anypoint Exchange 2.0 and the completely new component, Design Center. Now that I've had a few days to explore all of the new functionality, I'm rounding up my highlights below.

Anypoint Exchange 2.0

As soon as I logged in, I was impressed by how much has changed. Exchange 2.0 is equipped with a brand new UI and is fully integrated with the Anypoint Platform. And this is a major release; MuleSoft has included capabilities for easy publishing and consumption of API specs, advanced search, collaboration and commenting, and analytics.

Let's start with the new graphical layout; it's smoother, clearer and, in my opinion, all of the graphical elements on the page are much better organised. It's now easier to switch between the private-assets-only view and the MuleSoft public assets view, and the new menu for filtering content also makes more sense.

Entering a published asset, I can see that there have been big improvements in how you engage with users and consumers. There's a new section where a user can leave a review and assign stars (from 1 to 5) to show their level of satisfaction. There are also new features like the 'share' option, to share the asset with another user within the same organisation, and the 'download as Mule plugin' option, to download an API spec as a plugin that can be imported directly into Anypoint Studio. The 'tags' section also deserves a mention - tags can be attached to an asset to make it easier for users to find.

The new interface also allows users to upload documents, like architecture blueprints or design documents. It's a much more intuitive experience which helps users find the right asset quickly and collaborate easily. Exchange 2.0 also introduces incentives to encourage collaboration, like the ability to set KPIs and measure the success of APIs by checking how many assets are created, how many are reused, how often, and so on.

Overall, I think this release is really geared up to increase collaboration between central IT and line of business. Building application networks should be easier than ever!

Exchange.jpg

Design Center

This is what I was really waiting for. Design Center is the new component of Anypoint Platform which comprises a new version of API Designer and also introduces the completely new Flow Designer. It means that designers and developers now have a single place to design and build APIs.

I have to say that I didn't spend as much time with the Flow Designer as I did with the new API Designer, but it was enough to say:

  • it's quite intuitive, user friendly and provides a guided flow design experience;
  • it's fast and reactive - although only a few message processors and connectors are available so far;
  • it can replace Anypoint Studio for applications of easy to medium complexity;
  • it can promote applications to different environments;
  • it gives users a live view of input and output data as the flow is triggered.

The only thing that hasn't really convinced me is the colour scheme; there's heavy use of greyscale in the user interface for the components and the popup windows. I'd prefer a more colourful UI - otherwise things blend together a bit too much.

The new API Designer has also had big improvements; first of all, a completely new skin for the user interface, with a better-organised shelf at the bottom to host the "hints", and the new API Console incorporated on the right-hand side. But that's not all. Two new functionalities have been added:

  1. Integration with Exchange: it's now possible to import and publish assets from/to Exchange directly in the API Designer;
  2. Code Versioning: to preserve old versions of an API, a "branches" mechanism has been introduced, which can be used to save old versions of API specifications.

API_Designer.jpg

So, to conclude, I'll have a lot of fun over the next few days exploring the other new functionalities and trying to build an application using only the Design Center. But, hey, stay tuned. This is only the first part of the Crowd release. By the end of Q3, a second part will be released and we'll see which new functionalities and components become available. Did somebody say API portals??

A day in the life of…a development intern

Author: Infomentum

Kicking off our new blog series, ‘A Day in the Life of…’ our summer interns, computer science students Pratik Jadhav and Hasan Rafiq, reveal a behind-the-scenes glimpse of an intern’s typical day.

I chose to study computer science because…

Pratik: My dad is in the IT business so it’s something I’ve been interested in from a young age. I love technology and I wanted to understand how everything works, and how to develop everything.

Hasan: Other members of my family are in IT so I always knew about it. At first, I was going to go into dentistry, but I tried it out and I couldn’t see myself looking into other people’s mouths for the rest of my life! So I knew then that I wanted to go into computer science.

The thing that most interests me in computer science is…

Pratik: Information security. It’s up and coming, and data protection is a massive issue in the market currently. I was really interested to learn more about Infomentum’s ISO certification, and to understand the policies they have to follow. I’ve heard about this at uni so it was interesting to see first-hand.

Hasan: Security as well! I also really like software development, like we’re doing during our internship here. I enjoy the creative elements of development, to come up with an idea and then conceptualise it. I’m really interested in working in development for a start-up in the future.

I decided to do an internship because…

Pratik: I wanted to understand what it feels like to work within a proper organisation in the technology business. I did an internship whilst I was at school, but it was for a food company, working in their IT department. I was really looking for an internship at a company that has technology at its core.

Hasan: I’m in my first year at uni, and normally you wouldn’t do an internship until the second year. But I really wanted to check the industry and be sure development was the right area I want to go into. I like to test the waters before I go into something – dentistry being an example of that!

The project I’m working on is…


Hasan: Pratik and I are working on a chatbot that provides information to anyone who is curious via Facebook Messenger. Basic information – not necessarily specific but generic information like address, links to a blog post etc. I’m working more on the blog subscription stuff, and Pratik is working on FAQs. With the blog subscriptions, people can specify when and what time of day they’d like to get alerts about a new blog post. If a new blog post comes out and the user hasn’t specified a time, they’ll get an alert from the bot the next time they log into Facebook.

Pratik: Exactly. It’s more to do with website information – so rather than the user having to search a website or search Google, the bot can respond with links to case studies, things like that. It’ll give the user all of the relevant information in one place rather than having to search for it. Using AI means that a member of staff doesn’t need to sit answering questions – the bot does it for them.

On a typical day…

Pratik: Usually we arrive and Amr (Gawish) takes us to the sprint board. Amr is our Project Manager for the chatbot, and has given us two week sprints. We have certain tasks to finish by the end of each sprint so that we can demo the product to the client for feedback and the next round of functionality. So, each day we talk about what we accomplished yesterday and any problems we had, and what we want to achieve today. He gives us advice on what he thinks we should work on next, and answers any questions we have.

Hasan: We plan our whole day at the sprint board, and then start working on the tasks allocated to us.

Pratik: We normally have an overall task which is split into smaller subtasks, so we’ll focus on those subtasks throughout the day. For instance, we had to get AI working with the Facebook chatbot and giving it small tasks – so instead of a person responding, the AI should. We had to get that integration working properly first. We’ll complete these subtasks and then expand them and add our own functionality.

Some surprising things I’ve learnt are…

Hasan: I knew nothing about agile before. I’d heard words like scrum, sprint etc, but I didn’t know anything about them. Lots of team members have sat with us to explain their role, so it’s been interesting to go around the business and understand what everyone does. I was also really surprised about the atmosphere in the Infomentum office – I wasn’t expecting it to be so open and friendly. It’s a different working environment than I thought it would be, in a good way!

Pratik: I’d heard of agile, but I’d never seen it in action. It’s good to see agile delivery first hand and see how it works in an organisation. I also enjoyed getting an insight on different roles in Infomentum and what they do day-to-day. We’ve heard about everything from pen testing, to marketing, to ISO.

If I wasn’t studying computer science, I’d study…

Hasan: I have no idea! Definitely not dentistry. I’d always liked computer science so I never thought about anything else.

Pratik: Medicine or economics. At school, I was very interested in becoming a doctor. But when I reached the last year of studying biology before uni, I realised I wasn’t very interested anymore, so I switched to economics. Then it was between economics and computer science, but since technology is always expanding, I was more interested in that.

My proudest achievement is…

Pratik: Not computer science related, but probably getting grade 7 on the piano! I still play the piano now.

When I leave the office…

Hasan: I play football quite a bit, and I also like video games. And sleep! I’m still adjusting to longer days than uni.

Pratik: I usually play cricket with some friends. I also help my younger brother with his work – he’s doing the 11+ at the moment. I have quite a long commute now, but I’d like to live in London in the future.

The evolution of test documentation; lessons learned from implementing Cucumber

Author: David Weston

The old way

I'm going to let you in on a secret. I learnt my trade in a traditional software testing consultancy - I'm talking a waterfall approach to test planning, test case definition and test scripting. Over the last 5 years at Infomentum, I've evolved and have worked hard to optimise Infomentum's testing practices to suit our agile development environment. 

Going back several years, Infomentum's Test Analysts would document detailed test scripts with extensive steps and expected results against each user story, and store them as a test suite in a dedicated test management tool. At the time, that level of detail was useful. The problem was, we were finding that scripts were taking a long time to produce, quickly becoming outdated, and the effort required to maintain them was just not sustainable. As the volume of scripted tests increased, so did the time needed to execute a regression test; and that increased the time needed to release.

We dabbled with test automation, but at the time we drew no connection between the tests we'd automated and the tests that were executed manually, and the relevance and coverage of the automated tests wasn't appreciated by the wider team.

Valuable Test Analyst time was being taken away from what mattered most: hands-on testing with the product under test; identifying, raising and resolving issues; and ultimately decreasing risk. We needed a way to drastically shift our testing time in the sprint from a documentation-to-test-execution ratio of 4:1 to 1:4, with a more exploratory approach to testing - whilst continuing to satisfy our clients' needs to understand our test coverage and see tangible test evidence.

The new way

Behaviour Driven Development (BDD) is becoming more and more popular amongst agile development teams. It's a methodology used to gain a common understanding of product requirements by describing the desired behaviours of the product. Cucumber is a tool for testing the behaviour of your application, described in a special language called Gherkin. Gherkin is a Business Readable, Domain Specific Language created for behaviour descriptions, which lets you remove logic details from behaviour tests. Gherkin serves two purposes: it acts as your project's documentation, and it drives automated tests, by defining behaviours as a sequence of Given, When and Then steps grouped into 'Scenarios' against a given product 'Feature', for example…

Feature: Refund item
  Scenario: Jeff returns a faulty microwave
    Given Jeff has bought a microwave for $100 on 2015-11-03
    And today is 2015-11-18
    And he has a receipt
    When he returns the microwave
    Then Jeff should be refunded $100

Before I go on, I should say that I'm aware that BDD is a technique for defining product requirements, and not specifically designed for being used as test cases. That said, at Infomentum, we've found scenarios to be an effective reference point during development and testing of a user story, to ensure that it's in line with the desired behaviours captured from the product owner.

Going through the process of collaboratively defining the scenarios really benefits from the analytical mindset of a tester, which is needed during test case design. The tester is in the best position to challenge with those ‘what if’ conditions that might not have been considered. The iterative approach to defining the scenarios allows for simultaneous test design and execution, as per the exploratory testing mandate, meaning the feature description and test documentation can evolve as one as each user story goes under test. Documentation is never out of date and always has a description of the desired behaviour of the product features at any moment in time.

To provide greater visibility and traceability, we load our features and scenarios into JIRA, linked to user stories. We use the issue status to track test execution progress. Watch this space for a future article on how we go about this - and how we utilise the JIRA API to integrate JIRA with our test automation framework, including dynamic generation of feature files.

The Infomentum test team use the scenarios as charters for exploratory testing, and utilise JIRA Capture to record output from our exploratory testing sessions. As we spend more time with the product, our understanding of it develops, which often leads to refinement of the scenarios in conjunction with the product owner.

But of course, we work in an agile environment. All modules are subject to continuous refactoring. That means it's not satisfactory to validate these behaviours as a one off exercise; the development team and the product owner need assurance that they won't break anything in the process of continuing to develop and re-factor product features. It's also not sustainable to run extensive manual regression tests. If you are not automating these tests, and are conducting the bulk of your regression tests manually, you are in fact introducing a form of technical debt, as the time required to test a release increases linearly as more functionality is added.

Cucumber JVM is our technology of choice for implementing these scenarios as automated tests, and for providing the development team with rapid feedback throughout development. Our Java based test automation framework utilises libraries like Selenium WebDriver to support our automation needs, and it's also heavily integrated with JIRA for recording test results. We'll expand on this in future articles.

Some lessons learned along the way

Here are some key things that we've learned from working with BDD and Cucumber over the last couple of years, and from implementing scenarios as automated tests in a Java framework. All of our recommendations are concerned with keeping your feature files and code succinct and straightforward to maintain:

  • Define conventions for expressing steps – avoid having multiple phrases for performing the same action or verification. For example:
When I press the Search button
When I click on Search
When I click on the Search button

The meaning of all the above phrases is the same, but Cucumber interprets them differently. By defining a set of rules and being consistent, your automation code will be a lot more manageable. Initially this requires a lot of reviews and refactoring, but after some time it becomes a habit - similar to the way developers adopt coding conventions to make sure they can understand each other's code. Try to reuse text instructions as much as possible – the more near-duplicate step definitions you have, the more difficult the code base will be to maintain.

  • Use parameters in your definitions – reusing text instructions is powerful, as Cucumber allows you to vary parts of your step instructions. The step definition method in the code is bound to a step instruction in the feature files using a regular expression with capture groups. E.g. for instructions like:
When I add 4 more bananas
When I add 3 more apples

…we don't need to bind each instruction to a separate step definition method; we can bind both to the following regular expression:

// Calculator addition - a single step definition bound to both instructions above;
// each capture group in the regular expression is passed in as a method parameter
@When("I add (.*) more (.*)$")
public void addAnAmount(String numberToAdd, String subject) throws Exception {
    calculator.performAddition(numberToAdd);
    System.out.println(subject);
}

In Cucumber, the captured text will be passed as parameters, which can then be used in the automated test function you define. This can drastically decrease the number of step definition methods required.

  • Use an appropriate level of detail in scenario steps and create modular building blocks that can be combined – restrict the steps in a scenario to what is actually being tested, to avoid your scenarios becoming too long. A simple example is login. When writing a scenario to test the login itself, you might describe it as:
When I enter the username “fredbloggs”
And I enter the password “password”
And I click the login button
Then I am logged in as “fredbloggs”

For all subsequent scenarios that are testing other areas, this could simply be described as:

Given I am logged in as “fredbloggs”
When I …

In this example, we took the instructions from the first test and wrapped them in a single instruction. The main idea is that if we don't care about how an action is performed, and we just need the result, then we can use more high-level instructions.

By building our step definitions in this way, we create a layer of reusable functions which represent the domain. These functions can be grouped together like building blocks to perform more complex functions.
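
As a rough illustration of that layering (the LoginActions class and its methods here are invented for illustration, not our actual code), the high-level step simply reuses the lower-level building blocks:

import cucumber.api.java.en.Given;

public class HighLevelSteps {

    // Hypothetical action class wrapping the detailed login interactions
    private final LoginActions loginActions = new LoginActions();

    @Given("^I am logged in as \"(.*)\"$")
    public void iAmLoggedInAs(String username) {
        // Reuse the same building blocks exercised by the dedicated login scenario
        loginActions.enterUsername(username);
        loginActions.enterPassword("password");
        loginActions.clickLogin();
        loginActions.verifyLoggedInAs(username);
    }
}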

  • Focus on behaviours not implementation - scenario steps should describe system and user behaviours, and in most cases shouldn't spell out each and every click required for the user to achieve that goal. Try to avoid coupling your scenario descriptions too tightly to the way a user story has been implemented - otherwise the description of the desired behaviour will be impacted every time the implementation is tweaked.

For example, define a step as;

When I add 2 pencils to my basket

And not;

When I click on Pencil
And I enter 2 in the quantity field
And I press the “Add to basket” button
  • Define a structure that is maintainable – if you keep all of your detailed automation code in a single step definitions class, you will quickly find that you are repeating a lot of code, and it becomes difficult to maintain - especially where you are using the same Selenium functions over and over again. Ideally you'd have a separate definitions file for each piece of functionality that you're testing. However, since we need to define our WebDriver, and reference the same driver through every step, all of our definitions need access to this driver.

Our structure is therefore to have a single step definitions file, with a package of action classes split by the different areas of functionality that we're testing within the application (e.g. login, car search). All of the complex automation logic resides in these action classes, and is split into logical functions which can be reused across multiple step definitions. Each step definition function is extremely lightweight, and simply acts as a binding layer between the feature runner and the functions within the action classes.

A utilities package is home to classes containing common functions required across multiple action classes; for Selenium WebDriver, for example, there's a class containing all the required functions. So that a single scenario can use functions across multiple action classes, with the same instance of WebDriver, the WebDriver class is instantiated from the StepDefinitions file and shared with each action class. That means they all have access to the suite of functions, and the same browser instance can remain open as the scenario moves between different areas of functionality. We have a number of other common utility classes, for querying databases and calling web services for example.

structure.png

We've found that this structure is the best way to keep your code functions modular, and to avoid duplicated code. (Ref. https://www.coveros.com/819/)
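
As a minimal, hypothetical sketch of that structure (the action class names below are invented for illustration), the step definitions class creates one WebDriver, shares it with each action class, and keeps every binding method as thin as possible:

import cucumber.api.java.en.Given;
import cucumber.api.java.en.When;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.firefox.FirefoxDriver;

public class StepDefinitions {

    // One browser instance for the whole scenario, shared with every action class
    private final WebDriver driver = new FirefoxDriver();

    // Hypothetical action classes, one per area of functionality under test
    private final NavigationActions navigationActions = new NavigationActions(driver);
    private final CarSearchActions carSearchActions = new CarSearchActions(driver);

    @Given("^I am on the homepage$")
    public void onTheHomepage() {
        navigationActions.goToHomepage(); // binding layer only - the logic lives in the action class
    }

    @When("^I search for cars in \"(.*)\"$")
    public void searchForCarsIn(String location) {
        carSearchActions.searchByLocation(location);
    }
}

All of the Selenium detail stays in the action classes, so adding a new area of functionality means adding a new action class rather than another sprawling step definitions file.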

  • Use Cucumber for non-functional tests – Cucumber is clearly well suited to verifying functional behaviours, but we've also experimented with validating desired non-functional behaviours. A simple example is to capture the time taken for a search function to return results to the interface, and assert that it responds within the NFR on each execution. We've also used Cucumber and our framework in accessibility, security and cosmetic comparison tests - but that's another blog post.
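
A rough sketch of how such a timing check might look as Cucumber steps (the action class, step wording and the 2-second threshold are all invented for illustration):

import cucumber.api.java.en.Then;
import cucumber.api.java.en.When;
import org.junit.Assert;

public class SearchPerformanceSteps {

    // Hypothetical action class and NFR threshold, purely for illustration
    private final SearchActions searchActions = new SearchActions();
    private static final long MAX_RESPONSE_MILLIS = 2000;

    private long elapsedMillis;

    @When("^I search for \"(.*)\"$")
    public void iSearchFor(String term) {
        long start = System.currentTimeMillis();
        searchActions.searchAndWaitForResults(term);   // trigger the search and wait for results
        elapsedMillis = System.currentTimeMillis() - start;
    }

    @Then("^the results are returned within the agreed response time$")
    public void resultsReturnedWithinNfr() {
        Assert.assertTrue("Search took " + elapsedMillis + "ms, exceeding the NFR of "
                + MAX_RESPONSE_MILLIS + "ms", elapsedMillis <= MAX_RESPONSE_MILLIS);
    }
}
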
  • Make your @Then verifications scratch beneath the surface – working within an open Java framework is useful for the freedom it creates. Most of our automated Cucumber tests are at least partly browser based, but not all; we also interact with web services, APIs and databases. This is particularly useful in the @Then steps, to verify the outcome of the scenario beyond what is shown in the front end. For example, we might check that the order status has been updated correctly in the database, or that an exposed API is returning the correct order status, to be completely satisfied with the scenario end to end.
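
For instance, a @Then step might query the database directly - the sketch below is illustrative only, with invented connection details and table names:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

import cucumber.api.java.en.Then;
import org.junit.Assert;

public class OrderDatabaseSteps {

    @Then("^the order \"(.*)\" has the status \"(.*)\" in the database$")
    public void orderHasStatusInDatabase(String orderId, String expectedStatus) throws Exception {
        // Hypothetical connection details - in practice these would come from a shared utility class
        try (Connection conn = DriverManager.getConnection("jdbc:mysql://localhost/shop", "test", "test");
             PreparedStatement stmt = conn.prepareStatement("SELECT status FROM orders WHERE order_id = ?")) {
            stmt.setString(1, orderId);
            try (ResultSet rs = stmt.executeQuery()) {
                Assert.assertTrue("Order " + orderId + " not found", rs.next());
                Assert.assertEquals(expectedStatus, rs.getString("status"));
            }
        }
    }
}
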
  • Use scenario outlines and examples to increase test coverage – Cucumber has a useful feature called Scenario Outline which can drastically increase your test coverage for similar tests with different combinations of inputs. In the scenario outline steps, you replace fixed values with variable <placeholders> and then populate a table of "Examples" beneath the outline. The table contains a set of values for each variant, one per row; the more rows we add, the more cases are checked. The scenario outline is executed once for each row, and each run uses the data from that row. This can be a more succinct way of defining a group of very similar scenarios, e.g. scrolling a carousel, pagination, search filters etc.
Scenario: eat 5 out of 12
  Given there are 12 cucumbers
  When I eat 5 cucumbers
  Then I should have 7 cucumbers
 
Scenario: eat 5 out of 20
  Given there are 20 cucumbers
  When I eat 5 cucumbers
  Then I should have 15 cucumbers

Scenario outlines allow us to more concisely express these examples through the use of a template with placeholders, using Scenario Outline, Examples with tables and < > delimited parameters:

Scenario Outline: eating
  Given there are <start> cucumbers
  When I eat <eat> cucumbers
  Then I should have <left> cucumbers
 
  Examples:
 | start | eat | left |
 | 12 | 5 | 7 |
 | 20 | 5 | 15 |
  • Use background steps to avoid unnecessary repetitive steps in your scenarios – where all scenarios for a given feature share a common set of steps, it's not necessary to repeat those steps in each scenario; they can be defined once at the feature level. These common steps will then be executed before each scenario, and the scenario steps will take over from that point. This is useful for navigating the user from the default home screen to the appropriate location for the context of the feature, and logging in if necessary.
Feature: End user should be able to see the news and views listing page
 
Background:
    Given I am on the homepage
    And navigate to the News and views homepage
 
 
Scenario Outline: Using the filter on the news page
    When I use the Regions & countries filter to select only "<regions>"
    And select "<type>" in the Type filter
    Then I can see a reduced number of articles
 
    Examples:
    |regions|type|
    |South of England|News|
  • Do not use Cucumber for white box unit testing – Cucumber tests are written in business language to define the end-to-end behaviours of a system. Unit tests are written by developers for developers and are designed to validate a single code function. Readable tests are not an advantage here, as developers understand the code and the purpose of each unit test well.

In summary

Cucumber has provided a way to increase the visibility of our tests; it expresses them in a language consistent with the wider team, and the client, creating a shared understanding about what is being delivered and tested. By using Cucumber, we've been able to meet our objectives of dramatically increasing time spent in test execution, and found an efficient way of documenting our testing and ensuring it's up to date.

Look out for my next article on how we took our Cucumber integration to the next level with Cucumber JVM automation and integration with JIRA.

The uprising of Oracle Intelligent Bot!

Author: Amr Gawish

Hands-on session

Recently, I was one of the lucky few handpicked to attend Oracle's partner training on Intelligent Bot in London - and that meant I got hands-on experience with the product and witnessed its glory in action! The product is called Intelligent Bot, and it will be available as part of the Mobile Cloud Service (MCS) suite, as it complements the other features of that product - not to mention that it's the product it fits best. The aim is to provide an easy way to create Chatbot applications in simple steps, with options for intent recognition - i.e. understanding what the Chatbot's end user means - and custom entities, so developers can take actions and drive the conversation towards whatever they see fit.

The hands-on sessions were given by two Oracle gurus, Grant Ronald and Frank Nimphius, who discussed the different aspects of the development process and how everything fits together. The hands-on experience was on point, and I personally had no problem navigating through the product and understanding it.

Oracle's vision is to put the power in your hands and give you the flexibility to control how you want to orchestrate the bot's behaviour. That's why it fits so well with MCS - the other MCS capabilities give you all the tools you need to get ready in a short space of time - and it provides even greater means for security and extensibility.

While the product has great features and a lot of flexibility, it's still in its infancy, and requires improvement in certain areas. That said, with the pace at which the product is evolving, I'm betting it will grow into something great in next to no time.

Overall, it was a great training, and it was nice to see that Oracle is providing this hands-on training to help its partners move forward with various technologies.

The team dinner

The team enjoying a well-deserved bite to eat after the training!

Nightmare on time sheet; how we overhauled the time logging process (part 3)

Author: Becky Burks

Wait: you need to read part 1 and part 2 first!

The fun side of cultural change

You’ll remember that last week I discussed the crucial need for a cultural change in how we dealt with time logging. After the CX team’s thorough research, we were ready to take the findings – and our ideas – to our board of directors. The first thing we showed them was our low-hanging fruit - a solution to the problem of people not remembering to regularly log their time. One of our in-house designers had created some fun posters to stick up around the office. A while back, we had an internal photoshoot as part of our rebrand, and we used our peer’s funny photos in the posters. With their permission of course!

David time logging.jpg / Farhan time logging.jpg / Caroline time logging.jpg / Marta time logging.jpg / Matteo time logging.jpg / Nelson time logging.jpg

We had approval to put them up straight away. This was a great start. We didn’t need to have a new system to start implementing this kind of cultural change. Everyone found the posters fun and entertaining, and we still have them around the office today.

Changing the system

Then came the more challenging part - justifying a change to our internal systems. We approached this in a structured format, leaving the best for last. First, we talked through all of the analysis we had done in the exploration phase, to prove why we needed a new system and a new time logging process. Then, we went into the details of how we selected the two potential options, along with the benefits that each would bring, and the cost implications.

We introduced both systems to the directors in a similar way to what we’d done for developers, and went into demo mode of each, answering the directors’ questions as we went along. At this point, it was clear that the room was leaning towards System A, just as the operations team had done when they saw the systems. System A was easy to manage and made it easy to get reports. The interface was also very similar to our existing time logging system, so the change needed was minimal.

Then, we showed them the results of the survey we had done at the end of the developers’ workshops. It turned out that pain points were lessened by more than half by System B compared to System A. Developers thought that System B would foster daily time logging from them as opposed to weekly with System A. All but one person thought their time logging would be more accurate with System B. On the final question about which system they would choose if it was up to them, all but one answered System B. The results were astounding. It was not what had been preferred from a management perspective at all, and without this feedback it would not have been the selected system. Of course, this information changed the game, and the directors immediately agreed the answer was to implement System B. The operations team and managers could adjust to a new way of working; the most important thing was to have a system that the developers actually wanted to use.

At the next company meeting, we announced the good news. Shortly after, we rolled out our chosen system: Tempo Timesheets (a JIRA add-on). We provided training to all of our different user groups.

Since then, nobody has looked back. This is not to say that everything is perfect, and as with any rollout, the cycle of continuous improvement should always follow. However, whenever people are asked in feedback sessions if they prefer the new system to the old, they chuckle and immediately agree it’s miles better.

Lessons learned

There’s really only one moral of this story, that really must always be top priority for any implementation; a system is useless if your users won’t use it. Listen to your users, have a plan for how to manage the change and approach it openly and honestly involving different areas of the business, and of course, make sure that what you deliver is something they genuinely want to use.

Nightmare on time sheet; how we overhauled the time logging process (part 2)

Author: Becky Burks

Hint: you'll need to read part 1 first for this post to make sense.

Phase 2: Imagine

The CX team got together to discuss the problems and carry out an internal ideation workshop to tackle them. We knew that we had 3 key tasks:

  • Improve people’s knowledge of why time logging is important
  • Create a culture of time logging as a habit; and
  • Make the systems they needed to use simpler and clearer, while having fewer of them!

Knowing that there was no room for improvement with our existing official timesheet system, we started to look at the market. We were looking for something that would integrate with JIRA, where our developers work on a daily basis, so that time logging could become a seamless process for them. We narrowed the list of contenders down and chose the two most promising to evaluate fully. We needed to check that each could cover all of the functionality we already had, and, more importantly, that both operations and the developers would want to work with it.

After a few weeks of testing both systems and evaluating them against a requirements check-list, we showed them both to the operations team. Overall, they liked both systems, but preferred System A over System B. However, we all knew the real test was to see which one the developers would use...

Two workshops were held simultaneously with two different groups of developers. One group saw System A first, where we ran a quick presentation about how it works and then let them play around logging time. While that group was looking at System A, the other group was doing the same for System B. Halfway through the sessions we swapped each team over, so they were evaluating the other system.

This approach let each team play with both systems, but in different orders and without speaking to each other. The idea was to eliminate any bias people might have after speaking to their peers in a different group, or bias towards the first or last system they saw.

At the end of the workshops, we asked them all to complete a survey before leaving the room. The survey questions were created to gauge how the systems would help them with the problems we had previously identified:

  • How regularly do you think you would log your time using System A and then System B?
  • Which pain points are lessened by logging time in System A and then System B?
  • Are you likely to spend less time in System A or B than you do currently?
  • Which system will help you to be more accurate?
  • What advantages do you think each will bring you?
  • If you were in charge of choosing a system, which would you choose and why?

The feedback was nearly unanimous. But it wasn’t quite time to make a final decision yet. We also needed to go through a cultural change…

Telling the story: why is time logging so important anyway?

The next stage for us in our time logging journey was to map out all of the pain points we had identified, and group them in an order which told a story of the problem:

Story of the problem.png

Then, it was time to communicate this to the masses. The arena? The next company meeting.

We used the company meeting as our golden opportunity to talk openly about our time logging woes. We presented the view of the problem from different people’s perspectives, and Dan took the time to explain why we actually need to log time, and how it helps us all.

Time logging.png

At this point, we had output from our workshops. We had the developers' feedback. We even had solid ideas for the way forward. Now, it was time to go to the directors to get their approval for implementation…

Check out next week’s blog to find out how we further managed cultural change and finally untangled our time logging troubles!

Nightmare on time sheet; how we overhauled the time logging process (part 1)

Author: Becky Burks

It had been coming for a long time…

At the end of every month, developers scrambled to complete their timesheets, while the operations team were breathing down their necks, threatening them with sticks to get it done - and done right. The developers were left wondering why operations wanted their time logs so desperately. Surely they already knew what they had been working on? The operations team were scratching their heads as to why the developers didn’t do it; they had been reminded at the end of every month like clockwork, yet every month the same story. It was hugely frustrating for both sides, and was approaching boiling point. As soon as the Customer Experience team had a break between projects, we knew this was a top internal issue that had to be solved.

Explore - the operations perspective

We knew we needed to start by getting to the root cause of the problem. So, we set up workshops with the different stakeholders, including operations and developers. They were set up in a way to not mix audiences, so that people would be free to express themselves without someone from a different side of the argument being present. 

The operations team was the right size for us to run the session 'interview style', with carefully prepared questions. From a CX Analyst point of view, it was an eye opener. We were so used to being involved in internal project teams that we hadn't considered the operations point of view. As we interviewed the team, we drew out a reflection of the time logging process as they saw it.

We learned that the time logs expected from developers were actually there so the company could calculate its profitability and get a view of staff utilisation. The operations team had to produce a bunch of reports on these topics at the end of every month, so that the directors could make informed decisions and the company would know where to focus its improvement efforts. As the CX team, this was the first time we fully understood what the time logs were actually used for, and why they were so important. Moving into the developer sessions, we were keen to find out if anyone else had been in the dark about this too.

Explore - the developers perspective

Next, we held a developers workshop. We made sure to invite specifically selected people (not that they were aware at the time!) to ensure we were covering everyone’s points of view; from testers, front and back-end developers, support developers, as well as those who were the best and worst at logging their time. We made clear to them at the beginning of the session that the aim of the workshop was to discover their pain points and motivations for logging time, as well as to gather their ideas on how we can improve the process.

But first, we had to break the ice. To start with something fun, and to get everyone into the open and honest mindset we needed for the real topic, we played the 'Draw Toast' TED Talk video. Then, of course, we made them all draw their own process for making toast.

While it seemed like a pretty useless exercise from their point of view (if entertaining!) it really achieved what we wanted - to show that everyone has a different point of view. It made the transition to the follow-on topic easy. Naturally, the next process we wanted them to draw was their process for logging time, however they each saw it.

When they were done, they each pinned it on the wall and explained it to the rest of the group.

As they each spoke, it became obvious that there were some common themes for everyone, no matter which member of the team they were. Everyone had a step in their process about confusion or panic. As they spoke, we also wrote down quotes from them and stuck them up with their process to get the full, annotated picture.


At the end of the session, our questions were answered. What motivated people to log their time? They were told to. All stick, no carrot. What was their experience of the time logging process? Confusion, frustration, panic and stress. They weren’t always sure which system to log time in, which tasks to log their time on, or even who to ask. They worried about the consequences of accidentally missing a day and having to go back to log it. Worse, because they didn’t understand its purpose, it gave them the unsettling “big brother is watching” feeling.

Around the same time, the CX team attended a fantastic Design Thinking course by Julia Goga-Cooke to further hone our skills. We picked up the useful technique of considering the Design Thinking process as a cycle of Explore, Imagine, Telling the Story and Implement. This was just the process we needed to use for the time logging project. We knew we were still in the exploration stage, and used the opportunity to delve deeper into this phase.

 

Explore - management perspective 

We held an interview with our COO, Dan Shepherd, to get his take on the time logging problem. Overall, Dan of course knew why time logging was important. He also had a good grasp of why it was frustrating for people on both sides. We shared with him some of the feedback we had gathered so far, and he was surprised to hear that the problem seemed to be more culturally ingrained than he had thought. He knew that we needed to get better at internal training and communication, but he still wasn't sure why people didn't ask which system to log time in.

 

The next steps

As the final step, we created a company-wide survey for everyone to have the opportunity to put forward their ideas and feedback. We also wanted to use it to put some quantitative data behind the problem. In summary we found out that:

  • We were using six different systems for time logging;
  • Most people said they logged time in the official timesheet system closer to the end of the month rather than daily;
  • However most developers consistently logged time in JIRA for each task they completed;
  • Most people thought the most important reason for logging time was billing;
  • The biggest pain point was that people couldn’t remember what they had done, the second biggest that they had to log in multiple systems, and the third that they were not clear which tasks to log time against.

The last question asked everyone for their ideas on how the process could be improved. The most common answer by far was to have one system to log time in. The next was making it simple enough for staff to do it daily.

So how did we tackle the problem? Check out next week’s blog post for the next episode of our time-logging saga…

The bullseye (not bored) scrum board

Author: Neil Clark

I’ve wanted to write something about Infomentum's scrum board since its inception in late 2012. Now seemed as good a time as any to share the story...

During an end of sprint retrospective, one of our developers Jakub said “I'm bored of the board.” That didn’t surprise me. We'd been doing two week sprints since much earlier that year, and after rough calculations, I realised that the team had been standing next to the standard scrumboard for about 45 hours...not all in one go, obviously. That was 20 sprints * 9 days per sprint * 15 minutes per stand up; i.e. a lot of time standing in front of a boring board. 

It was good timing really. We'd just done a major deployment, so were in a period of warranty and bug fixing - and that meant I had a bit more time on my hands. I went Googling, and found Craig Strong’s super hero scrum board.

It was the inspiration that gave me the following:

Bullseye board 1.0 

Original bullseye board 1.0

This was very much a first draft. The board is looking pretty different these days (I'll come to that in a moment), but the way we use it hasn't changed.

Board structure

The main point of the board is that it focuses the eye on the sprint goal and the mascot in the middle. That's where we’re aiming. That's where we want to get all of our user stories to, so that we can achieve the sprint goal and appease the mascot (who happens to be Dominick the Christmas donkey in the above picture). We (try to) choose or create a funny mascot for our sprints which is loosely connected to the goal. 

The board is divided into segments, each representing one day in a sprint. At the time, we were doing two week sprints, with one day written off for the review, retrospective and planning.

On the left of the circle, you can see the area where we put our 'To-Do' user stories. We only put user stories, not tasks, on the board; we use Jira, so sub-tasks are managed there. I think the board gets too messy if you try to include a card for each sub-task as well. We also have high-level technical tasks in our sprints (e.g. Set up UAT environment), which are chunky enough to be on the board as well.

The outer red circle is for 'In Progress' user stories - stories that developers are working on but that haven't been passed to the tester on the project yet. On the first full day of the sprint (usually a Tuesday) we have a different type of daily stand up, where the whole team discusses a rough plan of when the various stories will be available to the tester, so they can properly get their hands on them and start finding bugs. For example, a developer starts a story and, based on the tasks and their workload, believes it will be ready for testing on the Monday of week 2 - so they place it in that segment.

The idea behind this is to create an even distribution of stories across the whole sprint. It allows the tester to test throughout the sprint, and avoid cramming all of the testing into the last day or two. If this happens, testers won't be given the time they need to make sure the definition of done is met. If/when they find bugs, fixes for those bugs might not be possible in the sprint, leading to technical debt or a bad sprint review. 

Highlighting this scenario is the first tangible benefit of the board: it shows when your sprints are becoming mini waterfall phases. If at any point during your sprint the majority of the story cards are in the final two segments of the board, something has gone (or is going) wrong. And with this board there's no getting away from it. Everyone can see it.

The ensuing discussions have brought out several root causes across projects I’ve worked on:

  • Dependencies on 3rd party providers (incorrect documentation, lack of access to environments, randomly turning things off), meaning development took longer than expected and held up other work
  • Large user stories that had too many areas of complex functionality within them; they needed to be sliced so they could be more easily understood, developed and tested
  • A developer hanging onto their work for too long until they perceived it to be perfect before releasing it to test; this was due to their misplaced sense of ownership
  • Bottlenecks on certain team members or skill sets
  • Inter-dependencies on tasks and stories

During the daily scrum, the team then reassess relevant stories as they discuss what they’ve been working on. The physical action of having to move a story further round the circle forces a discussion about why and what the rest of the team can do to help remove the impediments that may be blocking the developer(s).

The next circle is In Test. The developer moves a story there once they’re happy their tasks are complete. The tester then moves it round to the section representing the day that they think they’ll be able to finish testing.

Once the tester is happy the story meets the definition of done, they place it in the bullseye. 

Bullseye board 2.0

Since the original attempt, the board has evolved and sharpened up its look. Firstly, Alecia and Erin used a simple string and blu tack technique to make a much neater circle, dividing it into something approaching equal segments. We also used South Park avatars to create likenesses of each other. The rest of the team were encouraged to follow suit after I had created my own - highlighting my lack of hair and slightly angry face. These avatars are used to show who is working on which story, and also just to have a bit of a laugh with each other. It may sound like forced fun, the way agencies say "we're cool and relaxed, we have an Xbox in the office", but it just creates a bit of informality. It's a talking point. It starts a conversation, and that for me is the most important thing: people interacting. People from outside the team often want an explanation as well!

The finale: bullseye board 3.0

The next significant version (also our current version) was created when we moved to our new office on Jewry Street in April 2015. Most of the walls are painted with whiteboard paint, so it was the perfect opportunity to get one of our designers to work their magic on creating our on-brand bullseye board. We had it printed as a wall sticker on the whiteboard walls:

Bullseye scrum board 3.0

But why the bullseye board?

For me the board has the following benefits:

  • Focuses on the goal and the mascot
  • Represents the cyclical nature of Scrum
  • Highlights when sprints are just becoming mini waterfall phases (all testing at the end) and helps us to remain agile
  • Highlights when stories are continually slipping round the board, so that we can address and remove impediments more quickly
  • Because of all the room around the board thanks to our whiteboard walls, it becomes an informal team noticeboard where anything can be shared
  • The use of avatars allows people to quickly see who’s working on what
  • Creates a talking point for those in and out of the project.

All in all, it's been a great success for us - that's why it's still going strong after 5 years.

Drop a comment below to let me know if you have created or seen any alternative scrum boards. 

Microservices: everything you need to know (part 4)

Author: Matteo Formica

In the previous post, microservices part 3, we mentioned there can be different approaches to building a microservices architecture. In this final post in the current series on microservices, I’ll be looking deeper into those. Read on:

Request-Response (Synchronous) and Service Discovery

In the examples shown so far in this series, the pattern used is request-response; the services communicate with each other directly by their public HTTP REST APIs. The APIs are formally defined using languages like RAML or Swagger, which are considered the de-facto standard of microservice interface definition and publication.

This pattern is usually adopted in combination with a component called Service Discovery:

Why do we need it? Remember that we're in a distributed context, where network conditions can change quite frequently and services can have dynamically assigned network locations. So, the services need to be able to find each other at all times. A service discovery tool abstracts the physical location where the services are deployed away from the clients consuming them.

When a service starts up or shuts down, it registers or deregisters itself with the service discovery tool, communicating that it's alive and what its current location is. It also queries the addresses of all its dependencies, which it needs to call in order to perform its task.

Examples of Service Discovery tools are Netflix Eureka (typically used via Spring Cloud) and Consul.
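
As a minimal sketch (assuming Spring Boot with a Spring Cloud Eureka client on the classpath, and the discovery server address configured in the application properties), a service can register itself simply by enabling the discovery client:

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.client.discovery.EnableDiscoveryClient;

// On startup this service registers itself with the configured discovery server
// (e.g. eureka.client.serviceUrl.defaultZone) and deregisters again on shutdown.
@SpringBootApplication
@EnableDiscoveryClient
public class CustomerServiceApplication {

    public static void main(String[] args) {
        SpringApplication.run(CustomerServiceApplication.class, args);
    }
}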

Event-driven (Asynchronous)

In those cases where microservices collaborate for the realisation of a complex business transaction or process, an event-driven approach is also adopted, which totally decouples the services from each other.

This means that the services no longer need to expose a public API (unless we use a combined approach), as they communicate with each other entirely via events. This is only possible by introducing a new component into the architecture, called a Message Broker:

The message broker is responsible for delivering messages from producers to consumers running on the respective microservices. The key requirements for the message broker are high availability and reliability; it guarantees that messages are delivered to the respective consumers in a reliable fashion. If a consumer is down, messages will be delivered when it comes back online.

Message brokers also provide features such as caching and load balancing. Being asynchronous by nature, they’re easily scalable. Standards like JMS and AMQP are dominant in major broker technologies in the industry.

This component enables the choreography pattern; the services collaborate in a choreography, asynchronously, by firing business events published to the message broker. No further complex orchestration or transformation takes place, as the complexity of the business logic lies inside the actual microservices.

One of the most popular message broker technologies at the moment is Apache Kafka.
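
As a rough illustration (the topic name and payload below are invented), a microservice publishes a business event to the broker and any interested consumers react independently, with no direct call between the services:

import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class OrderEventPublisher {

    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        // Fire-and-forget publication of a business event; consumers subscribed to the
        // "order-events" topic will process it whenever they are available.
        try (Producer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("order-events", "order-1234",
                    "{\"event\":\"OrderPlaced\",\"orderId\":\"1234\"}"));
        }
    }
}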

Composite (Hybrid)

Of course, nothing is preventing us from mixing the two approaches; the composition of microservices is realised with a mix of direct calls through HTTP and indirect calls through a message broker.

Microservices Technologies

As you can imagine, we’re seeing an explosion technologies you can use to implement microservices at the moment. Which one to use really depends on which language, framework or capabilities we expect to use - and this may depend, in turn, on the skills we have in our team, existing products licenses we already have, and so on.

As a general principle, any language or framework which allows us to expose a REST interface, or is able to use messaging protocols (e.g. JMS) is a candidate for implementing a microservice. Remember, one of the main points of adopting this kind of architecture is that technology choices don’t really impact the overall system, so we have total freedom to choose whatever is best for the purpose.

To mention some of the popular microservices-oriented frameworks, you may opt for Java ones (Spring Boot & Spring Cloud, Dropwizard, Jersey - the open-source reference implementation of JAX-RS), Node.js (Express, Sails), Scala (Akka, Play, Spray), Python (Flask, Tornado) and many more.

This is not meant to be an exhaustive list at all; there are countless options you can choose from.

What about the distribution of our microservices? Where are we supposed to deploy them, and how are we going to manage them (especially when they start growing in number)?

To answer this question we need to introduce the concepts of Application Container, Container Orchestrator and cloud-based Managed Container Service.

Application Containers

Application containers are a big topic right now, and they're becoming the preferred way to distribute microservices. I don't want to go too deep into the subject - you can find plenty of information about how containers work, and what the differences and advantages are compared to traditional physical/virtual machines. By far the most popular container technology today is Docker, and here you can find the full explanation of what a container is.

All you need to know at this stage is that a container consists of the application plus the bare minimum necessary to execute and support it. A container is meant to be portable across different physical hosts, virtual machines, cloud providers and environments; you should be able to run your container on your laptop, in a DEV environment or in Production in exactly the same way. The only external dependency of a container is the technology needed to run the containers themselves.

Usually the container runs a very lightweight Linux distribution (like TinyCore, CoreOS, Alpine Linux, etc), containing only the bare essential OS libraries the application needs to run.

If you have a look at the adjectives describing a container (lightweight, isolated, portable, etc.) you may understand why this is a perfect match for distributing microservices!

Container Orchestrators

Usually the containers are used in combination with Container Management technologies, also known as Container Orchestrators or Schedulers.

Remember that microservices are meant to be deployed as distributed applications; this means we need to take care of things like high availability, clustering, and load balancing, scaling service instances on the fly, rolling upgrades and taking care of the dependencies and constraints, etc. Luckily for us, this is exactly what these products take care of.

Among the most popular technologies at the moment are Kubernetes, Apache Mesos and Docker Swarm.
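
As a small illustration of "scaling service instances on the fly", here's a quick sketch using the official Kubernetes Python client. It assumes a Deployment called cart-service already exists in the default namespace - both names are hypothetical.

```python
# A sketch of scaling a service through an orchestrator's API, using the
# official Kubernetes Python client. It assumes a Deployment called
# "cart-service" already exists in the "default" namespace (hypothetical).
from kubernetes import client, config  # pip install kubernetes

config.load_kube_config()  # uses your local ~/.kube/config
apps = client.AppsV1Api()

# Ask Kubernetes to run three replicas of the service; the orchestrator takes
# care of placing them on nodes, restarting failed ones and load balancing.
apps.patch_namespaced_deployment_scale(
    name="cart-service",
    namespace="default",
    body={"spec": {"replicas": 3}},
)
```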

Managed Container Services

If you don’t want to worry too much about the underlying infrastructure, you may opt for a managed container service, delegating the operations above to a cloud provider.

All of the main vendors now provide cloud platforms which use all (or many) of the technologies mentioned above in a way that is transparent to the end user. To mention some of them: Oracle Application Container Cloud Service, Amazon’s AWS Elastic Beanstalk, Google App Engine and Microsoft’s Azure App Service.

In a nutshell, via these platforms we can upload our microservices (in a packaged format, like JAR, WAR or ZIP), specify a very basic configuration (like the command needed to execute the service, the environment variables the application needs, the ports to open, etc.) and then, behind the scenes, the platform provisions a new container and deploys the application on it. After the container is started, the full lifecycle of our distributed application can be managed via the platform (load balancing, scaling, starting and stopping containers, etc.).

Conclusion

We’ve finally reached the end of this series!

I tried to give a 360-degree view of this topic without going too deep into the details - that was never the point of this series.

I’m sure I’ll be back in the future with more microservices related posts, so make sure you subscribe for updates – otherwise you might miss it!

Microservices: everything you need to know (part 3)

Author: Matteo Formica

Wait! Have you read part 1 and part 2? You’ll need to read those before carrying on.

How to decompose a monolith

When it comes to microservices, the million dollar question is: “How do I decompose my monolith into microservices?”. Well, as you can imagine this can be done in many ways, and here I’ll be suggesting some guidelines.

The first step, of course, is the design. We need to establish our service granularity, then decompose our domain into exclusive contexts, each of them encapsulating the business rules and the data logic associated with that part of the business domain. The architect will be responsible for defining the service boundaries - and this is not an easy task. I’d say that decomposing a business domain is an art rather than a science. In a monolith, on the other hand, it’s not always clear where a service ends and another one starts, as the interfaces between modules are not well defined (there’s no need for them to be).

To identify the microservices we need to build, and to understand the scope of their responsibility, listen to the nouns used in the business cases. For example, in e-Commerce applications we may have nouns like Cart, Customer and Review. These indicate a core business domain, so they make good candidates to become microservices. The verbs used in the business cases (e.g. Search, Checkout) highlight actions, so they are indications of the potential operations exposed by a microservice.
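
As a sketch of that idea, here's how the noun Cart could become a small service whose operations are the verbs (adding an item, checking out). The routes, payloads and in-memory storage are all invented, and Flask is used only for illustration.

```python
# A sketch of how a noun from the business cases ("Cart") can become a
# microservice, while the verbs ("add", "checkout") become its operations.
# Routes and payloads are made up for illustration.
from flask import Flask, jsonify, request

app = Flask(__name__)
CARTS = {}  # cart_id -> list of items (stand-in for the service's own storage)

@app.route("/carts/<cart_id>/items", methods=["POST"])
def add_item(cart_id):
    # Verb "add": attach an item to the cart.
    CARTS.setdefault(cart_id, []).append(request.get_json())
    return jsonify(CARTS[cart_id]), 201

@app.route("/carts/<cart_id>/checkout", methods=["POST"])
def checkout(cart_id):
    # Verb "checkout": empty the cart; a real system would hand over
    # to the order/payment services at this point.
    items = CARTS.pop(cart_id, [])
    return jsonify({"checked_out": items})
```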

Consider also the data cohesion when decomposing a business problem. If you find data types that are not related to one another, they probably belong to different services.

In a real-life scenario, if the monolith uses centralised shared storage (e.g. an RDBMS) for its data, the new architecture does not necessarily imply that every microservice has its own database: it may simply mean that a microservice is the only one with access to a specific set of tables, related to a specific business case.

As a general principle, when decomposing a monolith, I personally think it’s best to start with a coarse-grained granularity and then refactor to smaller services, to avoid premature complexity. This is an iterative process, so you won’t get it right on the first shot. When services start having too many responsibilities, accessing too many different types of data or having too many test cases, it’s probably time to split one service into multiple services.

My last guideline is not to be too strict with the design. Sometimes aggregation is needed at some point (maybe some services keep calling each other and the boundaries between them aren’t too clear), and some level of data sharing may also be necessary. Remember, this is not a science, and compromises are part of it.

Challenges and pitfalls

If you’ve made it this far, you may have already spotted some potential challenges.

Whether we’re migrating from a monolith or building a new architecture from scratch, the design phase requires much more attention than in the past. The granularity needs to be appropriate, the boundary definitions need to be bulletproof, and the data modelling must be very accurate, as this is the base we’ll build our services on.

Since we’re now in the kingdom of distributed systems, we rely heavily on the network for our system to work correctly; the actual bricks which make up our application are scattered across different locations, but still need to communicate with each other in order to work as one.

In this context, there are many dangerous assumptions we could make, which usually lead to failures. We cannot assume that the network is reliable all the time, that there is no latency, that bandwidth is infinite, that the network is secure, that the topology won’t change, that the transport cost is zero, that the network is homogeneous, and so on. Any of these assumptions can be broken at any time, and our applications need to be ready to cope with it.

So, the first point is making sure our services are fault tolerant; that means adopting the most common distributed-systems implementation patterns, like circuit breakers, fallbacks, client-side load balancing, centralised configuration and service discovery.
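
To illustrate the idea, here's a hand-rolled circuit breaker with a fallback. In a real project you'd more likely reach for an existing library (for example pybreaker in Python or resilience4j on the JVM), and the thresholds, timings and service URL below are arbitrary.

```python
# A hand-rolled sketch of a circuit breaker with a fallback, just to show the
# idea; thresholds, timings and the service URL are arbitrary.
import time

import requests

class CircuitBreaker:
    def __init__(self, max_failures=3, reset_after=30):
        self.max_failures = max_failures
        self.reset_after = reset_after
        self.failures = 0
        self.opened_at = None

    def call(self, func, fallback):
        # While the circuit is open, fail fast and use the fallback.
        if self.opened_at and time.time() - self.opened_at < self.reset_after:
            return fallback()
        try:
            result = func()
            self.failures, self.opened_at = 0, None  # success: close the circuit
            return result
        except requests.RequestException:
            self.failures += 1
            if self.failures >= self.max_failures:   # too many failures: open it
                self.opened_at = time.time()
            return fallback()

reviews_breaker = CircuitBreaker()

def get_reviews(product_id):
    return reviews_breaker.call(
        func=lambda: requests.get(
            f"http://reviews-service:5000/products/{product_id}/reviews",
            timeout=2,
        ).json(),
        fallback=lambda: [],  # degrade gracefully: show no reviews instead of an error
    )
```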

To have full visibility of the status of our services, good monitoring needs to be in place – and I mean more than before (everything fails all the time, remember?). Compared with monolithic architectures, we may have less complexity on the implementation side (smaller and more lightweight services), but we have more complexity on the operations side. If the company does not have the operational maturity to automate deployments and to scale and monitor services easily, this kind of architecture is probably not sustainable.

Another important factor to consider is that in large distributed systems, the concept of ACID transactions does not apply anymore. If you need transactions, you need to take care of this yourself.

It’s not realistic to think we can guarantee the strong consistency we used to guarantee in monolithic applications (where all components probably share the same relational database). A transaction now potentially spans different applications, which may or may not be available at a particular moment, and latency in data updates is likely to happen (especially when we adopt an event-driven architecture – more on this later).

This means we are aiming to guarantee eventual consistency rather than strong consistency. In a real-world business case, more than one service can be involved in a transaction, and every service can interact with different technologies, so the main transaction is actually split into multiple independent transactions. If something goes wrong, we can deal with it via compensating operations.
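
Here's a small sketch of the compensating-operations idea: each local step has an undo action, and if a later step fails we run the compensations for the steps that already succeeded. The service calls are stubbed out and all the names are invented.

```python
# A sketch of compensating operations: each step has an "undo" action, and a
# failure triggers the undos of the steps that already completed.
# The service calls are stubbed out; names are made up for illustration.
def reserve_stock(order):   ...
def release_stock(order):   ...
def charge_payment(order):  ...
def refund_payment(order):  ...
def create_shipment(order): ...

def place_order(order):
    steps = [
        (reserve_stock, release_stock),
        (charge_payment, refund_payment),
        (create_shipment, None),  # nothing comes after it, so it never needs undoing
    ]
    completed = []
    for action, compensation in steps:
        try:
            action(order)
            completed.append(compensation)
        except Exception:
            # Compensate the steps that already succeeded, in reverse order,
            # so the system converges back to a consistent state.
            for undo in reversed(completed):
                if undo:
                    undo(order)
            raise
```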

Some of the most common microservices implementation patterns work particularly well in this context, such as event-driven architectures, event sourcing and CQRS (Command Query Responsibility Segregation)…but these are not the topic of this post. In fact, in next week’s blog post I’ll be looking at these architecture patterns in detail. Make sure you subscribe to catch the final post of this series on microservices.
