
Microservices: everything you need to know (part 3)

Author: Matteo Formica

Wait! Have you read part 1 and part 2? You’ll want to cover those before reading on.

How to decompose a monolith

When it comes to microservices, the million dollar question is: “How do I decompose my monolith into microservices?”. Well, as you can imagine this can be done in many ways, and here I’ll be suggesting some guidelines.

The first step of course is the design. We need to establish our service granularity, then decompose our domain into exclusive contexts, each of them encapsulating the business rules and the data logic associated with that part of the business domain. The architect will be responsible for defining the service boundaries - and this is not an easy task. I’d say that decomposing a business domain is an art, rather than a science. In a monolith, on the other hand, it’s not always clear where one service ends and another begins, as the interfaces between modules are not well defined (there is no need for them to be).

To identify the microservices we need to build, and understand the scope of their responsibility, listen to the nouns used in the business cases. For example, in e-Commerce applications we may have nouns like Cart, Customer, Review, etc. These are an indication of a core business domain, hence they make good candidates to become microservices. The verbs used in the business cases (e.g. Search, Checkout) highlight actions, so they are indications of the potential operations exposed by a microservice.
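To make the heuristic concrete, here’s a minimal sketch (with hypothetical names, not a prescribed design) of how those nouns and verbs might map onto service contracts in Java:

```java
import java.util.List;

// Hypothetical e-Commerce domain types, purely for illustration.
record Cart(String customerId, List<String> productIds) {}
record Order(String orderId, String customerId) {}
record Review(String productId, int rating, String text) {}

// Nouns from the business cases (Cart, Review) suggest candidate microservices;
// verbs (Checkout, Search) suggest the operations those services expose.
interface CartService {
    Cart getCart(String customerId);
    Order checkout(String customerId);      // the "Checkout" verb becomes an operation
}

interface ReviewService {
    List<Review> search(String productId);  // the "Search" verb becomes an operation
    void submit(Review review);
}
```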

Consider also the data cohesion when decomposing a business problem. If you find data types that are not related to one another, they probably belong to different services.

In a real life scenario, if the monolith uses a centralised shared storage (e.g. RDBMS) to store its data, the new architecture does not necessarily imply that every microservice has its own database: it may mean that a microservice is the only one with access to a specific set of tables, related to a specific business case.

As a general principle, when decomposing a monolith, I personally think it’s best to start with a coarse grained granularity, and then refactor to smaller services, to avoid premature complexity. This is an iterative process, so you won’t get it right on the first shot. When services start having too many responsibilities, accessing too many different types of data or having too many test cases, it’s probably time to split one service into multiple services.

My last guideline is not to be too strict with the design. Sometimes aggregation is needed at some point (maybe some services keep calling each other and the boundaries between them are not too clear), and some level of data sharing may also be necessary. Remember, this is not a science, and compromises are part of it.

Challenges and pitfalls

If you’ve made it this far, you may have already spotted some potential challenges.

Whether we’re migrating from a monolith, or building a new architecture from scratch, the design phase requires much more attention than in the past. The granularity needs to be appropriate, the boundary definitions need to be bulletproof, and the data modelling very accurate, as this is the base we build our services on.

Since we’re now in the kingdom of distributed systems, we rely heavily on the network for our system to work correctly; the actual bricks which make up our application are scattered across different locations, but still need to communicate with each other in order to work as one.

In this context, there are many dangerous assumptions we could make, which usually lead to failures. We cannot assume that the network is reliable all the time, that latency is zero, that bandwidth is infinite, that the network is secure, that the topology won’t change, that transport cost is zero, or that the network is homogeneous. Any of these assumptions can be violated at any time, and the application needs to be ready to cope with it.

So, the first point is making sure our services are fault tolerant; that means adopting the most common distributed systems implementation patterns, like circuit breakers, fallbacks, client side load balancing, centralised configuration and service discovery.
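To give a feel for how a circuit breaker behaves, here’s a minimal hand-rolled sketch in Java. It’s illustrative only - in practice you would normally reach for a library such as Hystrix or Resilience4j rather than rolling your own:

```java
import java.time.Duration;
import java.time.Instant;
import java.util.function.Supplier;

// Minimal circuit breaker sketch: after a number of consecutive failures the
// breaker "opens" and calls fail fast to a fallback until a cool-down period
// has passed, protecting callers from hammering a struggling dependency.
public class SimpleCircuitBreaker<T> {

    private final int failureThreshold;
    private final Duration coolDown;
    private int consecutiveFailures = 0;
    private Instant openedAt = null;

    public SimpleCircuitBreaker(int failureThreshold, Duration coolDown) {
        this.failureThreshold = failureThreshold;
        this.coolDown = coolDown;
    }

    public T call(Supplier<T> remoteCall, Supplier<T> fallback) {
        // Open state: fail fast until the cool-down has elapsed.
        if (openedAt != null && Instant.now().isBefore(openedAt.plus(coolDown))) {
            return fallback.get();
        }
        try {
            T result = remoteCall.get();   // closed (or half-open): attempt the call
            consecutiveFailures = 0;
            openedAt = null;
            return result;
        } catch (RuntimeException e) {
            if (++consecutiveFailures >= failureThreshold) {
                openedAt = Instant.now();  // trip the breaker
            }
            return fallback.get();
        }
    }
}
```

A caller would then wrap a remote call, e.g. breaker.call(() -> reviewsClient.fetchReviews(id), List::of) (with reviewsClient being a hypothetical client), and receive the fallback value whenever the dependency keeps failing.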

To have full visibility of the status of our services, good monitoring needs to be in place – and I mean more than before (everything fails all the time, remember?). Compared with monolithic architectures, we may have less complexity on the implementation side (smaller and more lightweight services), but we have more complexity on the operations layer. If the company does not have operational maturity, where we can automate deployments, scale and monitor our services easily, this kind of architecture is probably not sustainable.

Another important factor to consider is that in large distributed systems, the concept of ACID transactions does not apply anymore. If you need transactions, you need to take care of this yourself.

It’s not realistic to think we can guarantee the strong consistency we used to guarantee in monolithic applications (where all components probably share the same relational database). A transaction now potentially spans different applications, which may or may not be available at a particular moment, and latency in data updates is likely (especially when we are adopting an event driven architecture – more on this later).

This means we are aiming to guarantee eventual consistency rather than strong consistency. In a real world business case, more than one service can be involved in a transaction, and every service can interact with different technologies, so the main transaction is actually split into multiple independent transactions. If something goes wrong, we deal with it through compensating operations.
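As a rough sketch of what compensating operations can look like, here’s a hypothetical checkout flow in Java (the client interfaces and method names are invented for illustration): each step is an independent local transaction against a different service, and a failure triggers the compensation of the steps that already succeeded.

```java
// Hypothetical checkout flow: each step is an independent local transaction
// against a different microservice, and a failure triggers compensating
// operations for the steps already completed (no distributed ACID rollback).
public class CheckoutSaga {

    interface PaymentClient   { String charge(String orderId); void refund(String paymentId); }
    interface InventoryClient { void reserve(String orderId);  void release(String orderId); }
    interface ShippingClient  { void schedule(String orderId); }

    private final PaymentClient payments;
    private final InventoryClient inventory;
    private final ShippingClient shipping;

    public CheckoutSaga(PaymentClient payments, InventoryClient inventory, ShippingClient shipping) {
        this.payments = payments;
        this.inventory = inventory;
        this.shipping = shipping;
    }

    public void placeOrder(String orderId) {
        String paymentId = payments.charge(orderId);        // local transaction 1
        try {
            inventory.reserve(orderId);                      // local transaction 2
            try {
                shipping.schedule(orderId);                  // local transaction 3
            } catch (RuntimeException e) {
                inventory.release(orderId);                  // compensate step 2
                throw e;
            }
        } catch (RuntimeException e) {
            payments.refund(paymentId);                      // compensate step 1
            throw e;
        }
    }
}
```

This is essentially the orchestration flavour of the saga pattern; the same idea can also be driven by events, which is where the event-driven approaches mentioned below come in.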

Some of the most common microservices implementation patterns work particularly well in this context, such as event-driven architectures, event sourcing and CQRS (Command Query Responsibility Segregation)…but these are not the topic of this post. In fact, in next week’s blog post I’ll be looking at these architecture patterns in detail. Make sure you subscribe to catch the final post of this series on microservices.

MuleSoft in action: the lowdown on MuleSoft Summit 2017


I'm an Integration Consultant and part of Infomentum's wider integration team. As well as being a MuleSoft trainer, I'm a certified MuleSoft developer and an Oracle SOA suite specialist.

Back at the beginning of this year, I received an invite to MuleSoft's London conference on 17th May. As a MuleSoft trainer, I jumped at the chance to attend - and last Wednesday I spent the day getting the latest news, techniques and tips from the world of MuleSoft. If you weren't there, well, don't worry, because I've got the lowdown from the day.

Setting the scene

I'm originally from Italy, and these big events aren't common there, so I have to admit that I was a bit excited to attend my first one. I was imagining a big conference where they make you feel part of something great and special. And the reality didn't let me down.

I arrived before 9am, got a quick look at the Mule himself, and grabbed a bite to eat before the keynote. At 9am, a booming voice from the speakers announced that it was time for the first talk of the day. That confirmed it: the MuleSoft Summit 2017 was officially starting! Moving to the main stage, I saw more than 1,000 seats available, but what grabbed my attention was the big screen and the scenic design of the stage; yes, I thought, MuleSoft are going all out for this event - it was the big-scale conference I'd imagined.

Keynote

Ross Mason. Topic: How Application Networks are Delivering Agility

After a short video introduction, Ross Mason, MuleSoft's Founder, took the stage. He immediately grabbed my attention: this guy knows how to speak in public. His opening question was straight to the audience: "how many of you are undergoing digital transformation?" After a majority show of hands, the hard-hitting statement was: "If you're not doing digital transformation, then why are you here?". It's clear that MuleSoft is a serious player helping organisations to transform the way they work. At that point, I was even more enthusiastic to be there.

Mason talked about the fast-moving pace of the IT world, and addressed one of the main problems a company faces nowadays during a digital transformation: the gap between demand and delivery. Many organisations are struggling, with not enough IT capacity to satisfy demands from the rest of the business - and that's impacting the customer experience. So what are the possible solutions to reduce this gap? Work harder? Or as Mason put it, run faster?

NO! The answer is APIs, Mason said.

Yes, you read it correctly. APIs.

APIs are simple, flexible, and easy to consume. They help developers to focus on the specific business problem that needs solving - they take us directly to the business case. But still, this alone is not enough; APIs by themselves are no panacea, Mason said. So, new slide on the screen, and here comes the solution: the API-led connectivity approach. What we need is an "application network" which makes it easy to connect to new services, which is easy to update, easy to add/remove connections to external systems, and which is built for rapid change. And the API-led connectivity approach, with its architectural concept of splitting APIs over three different layers, is our best friend in building the application network we want.

After a few more eye-opening slides demonstrating the power of APIs, Mason's keynote ended. But the day moved quickly, and it was time for me to head to the next round of sessions. There were several tracks to choose from, including technical, business-oriented and prospective partnership focussed tracks. I of course chose the technical sessions, specifically the advanced developer sessions. So, here's how it went:

On to the advanced developer sessions

Stanislav Pokraev. Topic: Docker, Kubernetes and API

This was a very DevOps-oriented session. Normally, this wouldn't have been my first choice of topic. But the title grabbed me, and I was there, so why not.

Pokraev started by introducing what Docker is, what Kubernetes is, and the main concepts behind these two products. Then it was demo time. Well, one word: wow! I'm quite new to containerisation and its management, but this guy introduced me to a new world. Deploy an application to a Mule runtime running in a Docker container. Make more copies of that container, and manage the requests through a load balancer in order to provide high availability for the application. Shut down a container and see that the application is still available; all of this using just one product, OpenShift, a container application platform built on Docker and Kubernetes for container management. Pokraev created some scripts in Maven, some other scripts to create the Mule image and the container, some description files to create pods and controllers, and finally ran everything in one place. Cool!

Jesus De Oliveira. Topic: Platform synergies for agile digital transformation

De Oliveira began the session with an introduction to the Pivotal Cloud Foundry platform, a cloud-native platform, explaining what it is, as well as the result of the collaboration between Pivotal and MuleSoft. Customers who want to create a network of applications, data and devices using MuleSoft can now deploy to Pivotal Cloud Foundry, and manage their application network within Anypoint Platform. De Oliveira showed us live how it works; he created an API definition and an API portal. Then, he published it on Anypoint Exchange and linked it to an application deployed and running on a completely different platform. Very interesting.

2nd advanced developer session

Patrick Pissang. Topic: "Quo Vadis?" Distributed Transactions

In this session, I was hoping for some more of the live demos I'd seen earlier on in the day, but it was a very theoretical topic. Pissang discussed distributed transactions, what they are, and whether it's worth using them. Well, his answer was no. To explain that, Pissang talked about some scientific studies that set out to mathematically prove why distributed transactions don't work. It's a difficult topic to cover in just 20 minutes - I'd need many more hours of study to go into more depth. That said, it's another lesson learnt and a new topic to do some further digging on.

Jerome Bugnet. Topic: Advanced end-to-end security with APIs

We all know too well the problems that can arise due to a lack of security, and we all understand the importance of security nowadays. And Bugnet did a great job of getting straight to the point. In his demo, he pointed out how identity propagation is crucial when using the API-led connectivity approach, and how to make sure that a front-end user, once logged in to the front-end application, is automatically logged into the back-end system. The identity propagation flow explained here was the "OAuth Dance" process. With a few steps, Bugnet showed how to implement the OAuth Dance with Anypoint Studio. Very good job.

3rd advanced developer session

Matheus Hermsdorff. Topic: API-led connectivity: a paradigm shift

With this session I found myself in front of a more theoretical topic, and I have to say that I wasn't as excited about this one; I don't know how many times I'd heard about the API-led connectivity approach since the morning. But to my surprise, Hermsdorff gave a different point of view, focussing our attention on how many times we've faced that specific problem, and how many times we've struggled with the Canonical Data Model and the SOAP protocol. I ended up enjoying this session a lot. It was very interesting to listen to a fresh perspective on the API-led connectivity approach, understanding the advantages of using it and how it makes applications easier to design.

Andrew Davey. Topic: Anypoint CLI & Platform REST APIs

The last session of the day, and it was a very technical one. Davey started talking about the Anypoint CLI (Command Line Interface) with a new technique - he asked for a volunteer who'd never used the command line interface to take his place and start typing some commands. He showed how easy it is to change the worker size of an application using very few commands, and by reading the related documentation with the help command. And this was only the first part of the demo. In the second part, he did the magic: he created an API portal and published a RAML on Anypoint Exchange by running an application in Anypoint Studio. WOW! Davey created an application with a series of flows that consumed the exposed Anypoint REST APIs. With a few interactions, the final result was the automation of API portal creation and its publication on Anypoint Exchange. Amazing.

Mulesoft Roadmap

After a long day of sessions and networking came the moment that everyone was waiting for; the MuleSoft Roadmap. And it didn't let me down.

There's a lot coming out in a short period of time, like the new runtime, Mule 4. But what really struck me was Anypoint Exchange 2.0 and the Flow Designer.

The new version of Exchange is completely different. There's a new, smoother design, with a better user experience in terms of search and use functionalities. They presented the Flow Designer and, all I can say is that I'm really looking forward to using and testing it. The Flow Designer is a new component of the Anypoint Platform where developers can build their applications and flows, and synchronise them with Anypoint Studio. Unfortunately, there was no live demo at this point, but we got a sneak peek video of the new look.

Finally, they put an end to our anticipation and announced the dates...Anypoint Exchange 2.0 and the Flow Designer will be released in (..drumroll..): JUNE 2017. So just a (very) short wait and we'll be able to use these new tools that, I'm quite sure, will change our work experience forever.


Some of the integration team having a post-summit beer with the Mule: (L-R) Bejoy Thomas, Fabio Persico and Antonio Aliberti 

 

Microservices: everything you need to know (part 2)

Author: Matteo Formica

I’m going to pick up from last week’s post, where we discussed what microservices are, and looked at the alternative approach to microservices, aka the monolith. Make sure you read that before carrying on with part 2.

Let’s jump in…

A different approach

Why is the microservices approach different? Let’s explore the main features one by one.

1. Business focussed

The main point of a microservices architecture is that each service needs to encapsulate a specific business capability.

The focus shouldn’t be on the technology, but on the business case. This complies with the “single responsibility principle”; the service shouldn't be limited to mere data carrying, or reduced to CRUD responsibilities, but instead should encapsulate all responsibilities relevant to the business domain for which it was designed.

This is one of the reasons why the design phase of microservices can be more complex than for a monolithic application; dividing the business domain into exclusive contexts (this is a concept coming from DDD, or Domain Driven Design) is not a straightforward task.

The word ‘microservice’ itself can be misleading, as the size is not necessarily the compelling factor here, but rather the fact that it must be business focussed, and the choice of technologies we make inside is purely aimed at getting the best fit for the purpose. Anyway, if we look for some sort of size guidance, let’s say it needs to be easy to comprehend for a single developer, small enough to be managed by a small team (see the “2 pizza rule” I mentioned in the previous episode), predictable, and easy to experiment with.

We can see the microservice as a vertical slice of functionality, a micro-silo with multiple tiers (including UI if necessary) and spanning across multiple technologies.

To make things a bit clearer, let’s take the example of a Music Search microservice. Just bear in mind, as I mentioned at the beginning, you’ll find a lot of different approaches to building microservices, so this is not intended to be the best solution - it’s just one viable solution according to the main microservices design principles.

Like many other microservices, this service exposes its functionality via public REST APIs for other services to consume. But it also contains a set of UI widgets which could be embedded in a portal or an external website.

The search capability does not span other services - everything is included here. For this reason, the storage technologies (in this example Apache Cassandra and PostgreSQL) are included inside the microservice; the search business logic is the only thing accessing this data, so there is no reason to keep it outside.

This way, the service is self-contained, as it includes all of its dependencies, isolated from the other microservices. All it needs to expose to the outside world is the public APIs and UI Widgets for others to consume or embed.
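As a minimal sketch (assuming a recent Spring Boot and hypothetical endpoint and type names), the public face of a Music Search service could be as small as a single REST controller, with the storage and search logic staying hidden behind it:

```java
package com.example.musicsearch;

import java.util.List;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.RestController;

@SpringBootApplication
public class MusicSearchApplication {

    public static void main(String[] args) {
        SpringApplication.run(MusicSearchApplication.class, args);
    }

    // Plain data type returned to callers as JSON.
    public record Track(String id, String title, String artist) {}

    // The only thing this microservice exposes to the outside world: its public API.
    @RestController
    public static class SearchController {

        // GET /api/music/search?q=miles
        @GetMapping("/api/music/search")
        public List<Track> search(@RequestParam("q") String query) {
            // In the real service this would delegate to search logic backed by the
            // service's own storage (Cassandra/PostgreSQL); here we return a stub result.
            return List.of(new Track("1", "So What", "Miles Davis"));
        }
    }
}
```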

A single team is responsible for maintaining this whole service in its entire lifecycle, from development to release.

2. Open, lightweight, and polyglot

Sticking to Fowler’s definition, in a microservices architecture the applications should communicate with each other using lightweight and open protocols and payloads. This will guarantee the reusability and performance of these services. So, depending on the pattern we choose (request-response, event-driven or hybrid) we will choose protocols like REST/HTTP, JMS or AMQP for example, and the payloads will likely use JSON.

A big advantage in this kind of architecture is not having a long-term commitment to any technology stack.

This gives us the possibility to choose the best language/framework suited for the purpose:

In this example, we might decide to implement:

  • Search service using Spring Boot (Java), Elastic Search as search engine and Cassandra as storage.
  • Reviews service with NodeJS and Express, Solr as search engine and Mongo as storage.
  • Shopping cart service with Scala and Spray, using Redis as a cache to store the cart items.
  • Contact service with Dropwizard (Java based) using PostgreSQL as storage.

What do these services have in common? The answer is…nothing, apart from the fact that they communicate between themselves via public APIs or events (we will get to this later on). Every microservice is free to adopt the language or framework that is most appropriate for its business domain. They also use different storage technologies. The concept of pan-enterprise data models and shared RDBMSs does not apply anymore. It doesn’t mean the RDBMS is gone for good (not at all), but it won’t be used anymore as a common way to share information, tightly coupling applications together and preventing scalability.

You may notice that in the figure above the API (the service interface) is separated from the actual service implementation (which is tied to a technology); this is just to stress the importance of separating the interface of the microservice (you can see the API as the “door” to access the service) from its implementation (see ‘Loosely Coupled, Isolated’ section later on).

3. Scalable

By nature, microservices need to be easy to scale horizontally, to be able to handle flexible workloads.

In order to achieve this, they need to be reasonably small in size (perfect candidates to be distributed via Docker containers), isolated and stateless. Once these prerequisites are met, we can leverage the features of the most popular container management platforms, like Docker Swarm, Kubernetes, or Apache Mesos, and scale our application easily:

When we’re talking about scaling, the financial factor needs to be considered; it’s much cheaper to scale out containers (inside the same physical or virtual machine) than to scale out entire machines. With monolithic applications, scaling the entire physical or virtual machine may be the only choice we have. This is because the components of the monolith cannot be scaled individually, and because it has many more external dependencies than an individual microservice. Basically, we need many more resources (physical and financial) to be able to scale out.

4. Loosely coupled, isolated

Dependencies between services are minimised by defining clear interfaces (APIs), which allow the service owners to change the implementation and underlying storage technologies without impacting the consumers. This concept is not new at all; it’s one of the basics of Service Oriented Architecture.
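As an in-process analogy (hypothetical types, not a prescribed design), the idea is the same as programming to an interface: consumers depend only on the contract, and the owning team is free to change what sits behind it.

```java
import java.util.List;

// Hypothetical contract: consumers of the Reviews capability depend only on this
// interface, never on how or where the reviews are actually stored.
interface ReviewStore {
    List<String> findReviews(String productId);
}

// Today the owning team might back it with PostgreSQL...
class PostgresReviewStore implements ReviewStore {
    @Override
    public List<String> findReviews(String productId) {
        // ...run a SQL query against the database owned by this microservice...
        return List.of("Great album", "A classic");
    }
}

// ...and tomorrow swap in a document store, with no impact on any consumer.
class MongoReviewStore implements ReviewStore {
    @Override
    public List<String> findReviews(String productId) {
        // ...query a MongoDB collection owned by this microservice...
        return List.of("Great album", "A classic");
    }
}
```

Between services, the contract is typically a versioned public API (e.g. REST) rather than a Java interface, but the effect is the same: consumers are insulated from implementation and storage changes.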

Every microservice owns its own data and dependencies, so there’s no need to share this with anyone else; it contains everything it needs in order to work correctly. Let’s bear in mind though, in real life scenarios there may be the need for some microservices to share some sort of session, or conversation. In these cases, a possible solution is to consider using distributed caching solutions (e.g. Redis or Coherence).

But if we’re doing this the best possible way, the only resource a microservice is supposed to expose is its public APIs. (Note: this is not true if we adopt a full event-driven approach – more on this in the last episode).

The external dependencies the microservices usually have are the platform capabilities themselves. The functionality to start, stop, deploy, monitor or inject metadata into a microservice cannot be part of the service itself, but rather a feature of the platform we are using to distribute them.

5. Easy to manage

Now let’s have a look at microservices management from the DevOps point of view.

[Image: Microservices and DevOps]

In a scenario with multiple independent applications, it’s likely that we’re going to have one dedicated team in charge of the development and deployment for each one of them.

In some cases, some services may be small and simple enough that one mid-size cross-functional team could maintain them all.

Since the services are business focused, the code base should be small enough to digest in a short time, and a new developer should be able to make changes on their first day on the project.

The deployment of a microservice is completely independent from the others, so whenever a change is ready it can be deployed at any time. The same team is responsible for development, testing and release, so there is no need for a separate team to take control of the release process.

If you think about it, this is exactly what DevOps is about; building, testing and releasing software happens rapidly, frequently and reliably, and the development team and operations team tend to become one and the same.

6. Fault tolerant

Keep in mind, in a microservices architecture every component is potentially a client and a server at the same time; every service needs to be able to work (or at least know what to do) when one of its dependencies (i.e. the services it needs to call according to its business function) is down.

Since in distributed systems “everything fails all the time” (as Amazon’s Werner Vogels reminds us), every microservice needs to be designed for failure, and circuit breakers need to be in place to prevent individual service failures from propagating through a large distributed system.

This implies that the larger our distributed application is, the more monitoring we need (much more than we need for a monolith), to identify any failures in real time.

Microservices: Service Oriented Architecture in disguise?

A common misconception is to consider these two approaches as alternatives to each other, but in fact the microservices approach is based on a subset of SOA patterns.

As an architectural style, SOA contains many different patterns, like orchestration, choreography, business rules, stateful services, human interaction, routing, etc. Mainly, SOA is focused around the integration of enterprise applications, usually relying on a quite complex integration logic to integrate simple “dumb” services; a SOA architecture usually relies on centralised governance.

On the other hand, microservices are more focused on decomposing monoliths into small independent applications, focusing on a small subset of SOA patterns, in particular choreography and routing. Microservices don’t need centralised governance to work with each other; they simply regulate their behaviour to interact smoothly with each other. The integration logic is relatively simple (routing and nothing more), while the complexity is actually moved into the business logic of the service implementation itself.

As an example, think of SOA as a big cross junction, and microservices as a crowd of pedestrians. Without governance (i.e. traffic lights, signs, lanes) at a cross junction, cars would crash into each other and traffic jams would happen all the time. The system wouldn’t work at all. This doesn’t happen with a crowd of pedestrians; the pedestrians regulate their speed and behaviour smoothly and naturally in order not to crash into each other, and there is no need for a third party to tell them how to do it.

In summary, borrowing the definition of Adrian Cockcroft (formerly cloud architect at Netflix and now at AWS), we can define microservices as a “fine grain SOA”:

[Image: Microservices are fine grain SOA]

In a later blog post, I’ll be taking a deeper look into some of the challenges of using microservices and possible approaches to decomposing a monolith. We’ll introduce event-based microservices, and have a quick look at the technologies and frameworks we can use to implement microservices, along with some of the most popular cloud platforms to distribute them.

Subscribe to the Infomentum blog to receive updates!

Microservices: everything you need to know (part 1)

Author: Matteo Formica

The discussion on microservices has exploded recently. It’s been heralded as the future. But is it really so new, or is it something more familiar than we think? Well, let’s start by setting the scene; what are microservices? Unfortunately, there is no universal and unique definition. In the most generic way possible, this is an “architectural style”, so it can be implemented in different flavours, and can be defined in many different ways. I personally think that Martin Fowler’s definition is the clearest and most exhaustive:

“In short, the microservice architectural style is an approach to developing a single application as a suite of small services, each running in its own process and communicating with lightweight mechanisms, often an HTTP resource API. These services are built around business capabilities and independently deployable by fully automated deployment machinery. There is a bare minimum of centralized management of these services, which may be written in different programming languages and use different data storage technologies.”

I like this definition because it captures some of the key features that the industry tends to agree on:

  • A microservices architecture is the result of a decomposition of a single application into a suite of smaller services.
  • Every microservice is a stateless, isolated and independent process.
  • They communicate with each other using open protocols (such as REST/HTTP, messaging).
  • They can be written using different languages / frameworks.
  • Each microservice encapsulates a specific business capability.
  • Every microservice must be independently deployable, replaceable and upgradeable.
  • They need a bare minimum of centralised governance.
  • They must be easily scalable.

Despite the industry interest in microservices at the moment, they aren’t an entirely new concept; we’ll see later on that microservices are actually one of the many ways to implement a SOA architecture, focusing on a few specific patterns.

In general, explaining microservices is easier when making a comparison with the traditional “monolithic” applications…

Once upon a time: the monolith

[Image: Monolith application]

Way back in the “traditional” web applications of the 90s and 2000s, we would probably see something like the above: a typical MVC structure where the services are accessing data from a single centralized RDBMS, which is probably shared by many other applications across the entire enterprise.

The services in this context are better described as “libraries”, i.e.:

  • They are part of the application
  • They don’t have a very clear interface definition (as they don’t need it)
  • They call each other using proprietary and closed or heavy protocols
  • They share the same data
  • And, most likely, they won’t be very reusable outside the context of this application.

The controller layer is responsible for invoking the services and rendering the data on the front-end, serving only one type of client (typically a web browser) - mobile apps and other devices weren’t a priority at the time.

[Image: Monolith application 2]

From the deployment point of view, the whole application is packaged into a single artefact (e.g. a WAR or EAR), and the deployment team is responsible for deploying it across environments (CI, CD and DevOps were not a thing back then!). This means that a single change in one of the libraries implies a redeployment of the whole application, and worse, that a problem at the end of the pipeline can block the entire release, preventing many other changes from going live. When the application grows in size, the code base may be split across different repositories, and different teams may be responsible for different parts of the code. But the deployment still represents a potential bottleneck.

The application above is very likely to be implemented using a specific language or framework. This decision was made at the design phase before the application was implemented, and now moving to a different technology stack is unrealistic; who’s going to pay for that?

Finally, a note on the actual development team; a large codebase cannot be digested by a single team, and when the team becomes too big, maintenance becomes more difficult. I’m a fan of Amazon’s “two pizza rule” when it comes to development teams; according to the rule, you should never have teams so big that two pizzas couldn't feed the entire group.

So that’s the history of the monolith. But why are microservices a different approach, and what benefits can they offer us? I’ll be diving into that in next week’s blog post.

Want to be notified when that blog is live? Subscribe to receive updates!

The Good, the Bad and the Code: Oracle Code London 2017

Author: Amr Gawish


Oracle Code Conferences started in March this year, in the red city itself - San Francisco. The event is doing the rounds worldwide, and I attended my local one in London last week with my Infomentum colleagues. My initial thought was that it was really interesting to see how Oracle is attracting a different audience this time around; more technically oriented attendees, with a broader spectrum of technical skills.

Oracle Code is sponsored by Oracle Developers (previously known as Oracle Technet). They had a great pool of presentations and technical sessions, covering subjects like Microservices, Node.js, CQRS and more. The sessions were great, and I personally enjoyed all those I attended. Luckily, all sessions were recorded and are watchable via their YouTube channel for those who couldn't attend. But now on to what you really want to know...what direction is Oracle going in?

The Good

I loved the new approach Oracle is taking with its audience. Oracle understands now that empowering developers will increase adoption and exploration of its different Cloud offerings, and with these events I'm guessing Oracle's stock among developers is going to rise. With this in mind, I put together a small list of things that Oracle nailed with this event.

1. Simple"r" Cloud Architecture

Oracle is now starting to provide developers with a lot more options to fit different requirements. Oracle Cloud Container is one example, and the recent acquisition of Wercker is another sign that Oracle is embracing the containerization approach. Another example is Oracle Application Cloud Service, which focuses more on empowering Microservice / Serverless styled applications - and with its simple RESTful APIs and Command Line Interface (CLI) SDK, it can be automated within any Continuous Integration environment.

2. Giving a chance for other Technologies and Frameworks to shine

One thing that was obvious at the event (and that I believe was intentional) was Oracle showing it is not a Java-only company anymore. There were a lot of presentations about Node.js, and more focus on choosing the right language for the task when using Microservices, rather than showcasing a single language/technology stack - which was definitely welcomed by developers.

3. Offer something for different sizes of businesses

Oracle is pushing the "Pay as you go" approach, which can fit businesses of all sizes and can provide a good alternative to Amazon, Google and Azure. It has also revamped the whole cloud infrastructure with Oracle Bare Metal, and at first glance it looks very promising.

4. Following the trends closely

Oracle Code also showed that Oracle is aware of different technological trends, and is giving developers options to utilise them instead of forcing its own agenda - which is a great approach in my opinion.

The Bad

While Oracle did an amazing job with the event, there are a few things that I would have loved to see or get answers for. However, since this is not an official Oracle conference, they were not obliged to do so!

1. Middleware stack fate

Oracle PaaS was the strength Oracle used to get into the Cloud market. This is changing right now, and while these PaaS products are still there, there was no mention of the Middleware stack or how these products are going to adapt to change in the future.

2. Oracle Cloud checkout is still hard!

Oracle Code gives $300 in credit for Oracle Cloud. Claiming it is a different story though. You have to provide payment information regardless (and the payment fails a lot for some reason!), and adding cloud services to your account, whilst simpler than before, is still missing a lot of features (a simple search feature would be nice!). While this is not really Oracle Code's fault, it just shows that Oracle still needs some housekeeping in order to compete more effectively in the cloud space.

3. Current Oracle product developer base

The Oracle Code conference focused on gathering a lot of technical skills. Most of the talks and sessions were focused on trendy subjects, and it's unclear how current Oracle developers will adapt to the new ways Oracle is pushing. Again, this is not the event's fault, but it would have been nice to gain some insights into these questions.

Conclusion

Overall, I think it was a great conference. I had a great time, and I got a free t-shirt to show for it!

Innovation through a different lens

Author: Infomentum

By Nelena Paparisva

In the words of Annie Dillard, “If we are blinded by darkness, we are also blinded by light”. Paradoxically, being in the light and having too many ideas can be as fruitless as being in the dark and having none. These were the opening words of Professor Verganti’s talk at Oracle’s Modern Business Experience London conference, which caught my attention. Two minutes in, I had my notepad out. For a long time, I had been a believer of ‘the more, the merrier’ when it comes to ideas, and had somehow paid no attention to how that compromises quality and value.

The digital era we live in enables us to access creativity; we don’t struggle to think of ideas. In fact, we have even devised processes to help us generate them. What we struggle with, is filtering them. Ideas are commodities. If we chase everything, we get nothing. The challenge of innovation is finding which ones are meaningful. How do we design meaningful products in a world awash with ideas?

The answer lies in the relationship between people, meaning and solutions. People create the meaning (the ‘why’) which leads to the solution (the ‘how’).

Let’s take a look at two examples:

Candles. Why did people buy candles in the 70s? What meaning did candles have for them? They bought them because they wanted their house to be lit in the event of a power cut. Today, in the 21st century, when power cuts are almost unheard of, Yankee candles are becoming more and more popular. Why? Because the meaning of Yankee candles is different to that of traditional candles, despite them being an almost identical solution. Yankee candles were not designed to keep a house lit; they were designed to create a warm and welcoming atmosphere. People fall in love with the why, not the how.

We can apply the same logic to photography. Kodak held a dominant position in photography for the majority of the 20th century. Why? Because Kodak focused on the meaning of their solution being memories. Today, with the hype of social media and the world of ‘online’, this meaning has changed. Understanding that change brings innovation. Snapchat approached the meaning of photography differently. The meaning now was communication - people being able to communicate quickly using short-lived photography to pass a message, for example, sending a selfie in front of Big Ben with the caption “Exploring the capital!”. It was not about creating long-lasting memories anymore. This realisation was important enough to attract 156 million users worldwide within 5 years of the app’s initial release.

The standard text on innovation suggests gaining input from outsiders. This can be effective at improving products, but it does not capture bigger opportunities in the marketplace. Innovation does not come from users, it comes from vision – candle users of the 70s would never make us think of Yankee candles. The vision is created from the inside out. It is ideation which happens from the outside in. In other words, don’t start from the market; start from the organisation.

In a world where ideas are abundant but novel visions are rare, innovation driven by meaning is what makes a difference. If something is meaningful both for the creators and the consumers, business value will follow.

Based on Prof Roberto Verganti’s talk at Oracle’s Modern Business Experience Conference, 2nd Feb 2017.

http://www.verganti.com/

2020: the year of the employee?

Author: Infomentum

Author: Rachel Edwards

Workplaces are changing rapidly – often without us even noticing. And now that the first wave of digital disruption has already passed, employees are demanding more sophisticated experiences from the companies they work for. This means that, as businesses, we have a choice: change, or get left behind. One way of ensuring that we move with the times is to listen to the changing expectations of employees as we move towards the next big milestone: 2020.

That’s all very well, but how can we approach this change beyond digital whilst still keeping employees on side? To investigate the shifting digital scene, Infomentum carried out a survey with over 1000 office workers to try and gauge the reality behind employee expectations.

So, what do the employees of 2020 actually want?

The skiver vs. the flexible worker

Right now, 41% of employees want to work from home. But 62% of bosses won’t let this happen. Are these flexible workers skiving, or the people of the future? Who’s right in this situation: the employee or the employer? With the number of workers demanding flexible hours set to rise, the answers are not so straightforward.

Whilst there are fears that out-of-office work may lead to lower productivity, it actually appears that the reverse is true. By adopting cloud document management systems your business can promise better collaboration between departments, and greater flexibility throughout the workforce. Working from home need no longer be a hindrance, but perhaps your greatest asset. Because, let’s face it, the organisations that are agile enough to let their employees work remotely will see the greatest benefits, both physically and technologically. Our previous report into Generation C, the connected generation, illustrates these benefits – exploring why fewer distractions, and less stressful environments lead to happier employees and greater company success.

Have you upgraded?

It’s time to listen to the 91% of employees who believe that their employer will no longer be competitive by 2020. Yes, this might seem like just another scare story. But, guess what: the world has already changed and those non-upgraders are being left behind in the wake of this digital boom.

We’re now looking beyond digital for the workplaces of 2020. Employees want their organisations to harness the flexible working technologies available in order to boost business success. And, there will be merit behind this. Once tasks become increasingly automated, employees will be able to devote more time to strategic thinking and generating new ideas.

The secret success of Gen C employees

In 2020, the employers who embrace the forward-looking attitudes of Gen C will be the most successful; it is these members of the ‘connected generation’ that are driving the pace of change. Their hardworking and increasingly flexible mind-sets will be your greatest asset – perhaps not such a secret, but still a truth easily forgotten. Attempts to enforce top-down controls will merely limit workforce motivation - and who wants that? So, instead, it is time to listen to the demands of the 2020 workforce.

So there you have it: a snapshot of the changing expectations of your 2020 employees. Want to pinpoint the specific areas that will work for you as we all move beyond digital? We thought as much. Read more in the full report: ‘Beyond Digital: what’s next for businesses in 2020?’.

2020: what are your customers expecting?

Author: Infomentum

Author: Rachel Edwards

77% of users claim they leave a site immediately if they experience any difficulty. It’s a shocking stat. One that perfectly illustrates the need for businesses to continually improve and innovate to keep up with progressive consumer demands.

But I hear what you’re thinking. We've heard this before, but how do we know where consumers will go next? The truth is, nobody knows what 2020 holds. To gain insight into what the market is expecting, Infomentum carried out a survey with over 1000 office workers to look into their opinions, behaviours and expectations for 2020 as both a customer and an employee.

We’re all Generation C

In case you hadn't noticed, age demographics are over. In the age of Generation C, the connected generation, it’s all about linking people through their shared behaviour, interests and expectations. Back in 2014, when we carried out research into Gen C, 54% of respondents identified themselves as part of the connected generation. With the internet embedded in every area of our lives and digital technology booming fast, the Gen C demographic will only continue to grow.

So much so, in fact, that the research predicts that by 2020, Generation C will be the dominant psychographic amongst both customers and the workforce. What are Gen C expecting from you?

Buying into 2020

In 2020, it’s not going to be enough that your website is mobile ready; mobile will mean more than just a smartphone. Hyper-connected consumers who are always on the move will expect an overhaul of the whole buying process.

[Image: The 2020 sale – your customers expect an overhaul of the buying process]

With the rapid pace of technology advancements it’s not unfeasible that this type of sale could become a reality.

What does it mean for businesses?

We’ll come back to your website because, let’s face it, if your website isn't ready now then it’s time to start working quickly or risk being left behind in the digital boom. It’s not about jumping straight into the 2020 sale by buying into all of the latest technology with no roadmap. Businesses need a solid strategy, a vision and a set of goals to achieve this. Armed with this, you can assess the state of play in your business currently, identifying gaps between where you are now and where you want to be. Then, and only then, is it time to look at technology.

Read more on how you can prepare your business for 2020 in the full report: ‘Beyond Digital: what’s next for businesses in 2020?’.

Back to the future – this is 2020

Author: Infomentum

Author: Rachel Edwards

Digital transformation: it’s been the business buzzword of choice for the last 2 years or so. It’s proven to be one buzzword which has some meat behind it. Digital transformation is still a hot topic and has manifested in tangible success for many businesses and even charities like The Prince’s Trust.

Like all trends, digital transformation means different things to different people. And like all trends, it must eventually fade until ‘digital’ just becomes the norm of how we do business. But what comes next?

Beyond digital?

Technology is evolving at such a rapid pace. It’s continually pushing the boundaries of what we ever thought possible in our wildest Back to the Future fantasies.

No one can say for certain what the future holds. To try and understand what could be coming, it’s important to understand the state of play today. Infomentum surveyed 1000 office workers to find out their current opinions, attitudes and experiences as both employees and customers. By finding out what motivates people now, we can begin to consider how 2020 may look; and more importantly, how businesses can prepare.

Thriving or skiving?

Employees feel they’re thriving. They’re embracing new technologies, and using them to their advantage. 39% of office workers are actively using social media to communicate and collaborate. But even in 2016, many bosses still view social media as skiving. The same goes for expectations of working from home; employees want it, but many bosses are still not open to the idea.

Employers need to ensure that their staff can access the same information as the office anywhere in the world, to remove the ‘skiving’ label from remote working. Businesses that are agile enough to allow their staff to move without constraint, both physically and technologically, will see the greatest benefits.

Rise of the fickle consumer

Worried your website is sub-par today? You should be. 77% of users claimed they would leave a site immediately if they experienced any difficulty. And guess what? They’ve probably gone straight to your competitor.

As we move towards 2020, consumer expectations will pressure brands into behaving in a way that best suits them. The businesses that aren’t prepared for a fast pace of change will get left behind.

This is just a snapshot. Read the full story on the state of play today, and find out how your business can prep for 2020 in the full report: ‘Beyond Digital: what’s next for businesses in 2020?’.

Omnichannel: the key to retail success?

Author: Infomentum

Author: Vikram Setia

No industry has seen the effects of digital disruption hit quite as hard, or as publicly, as the retail sector.

Shock headlines detail the decline of footfall, with indignant images of boarded up stores and empty high streets. Pre-Christmas forecasts preached that the most successful holiday retailers would be those offering the optimum online shopping experience.

But aside from the scare-mongering surrounding physical stores, the key focus for retailers has remained steadfastly the same; how to offer the best customer experience? There are key challenges the industry can address today to keep up with changing consumer demand.

Business Mistelligence?

You have key decisions to make – from your store strategy, to new growth areas and how to engage online for loyalty – and you need to make an informed decision. You turn to your data to inform your decision, right? That’s not the case for many retailers who are struggling to gain insights from the masses of data available to them. Taking control of your data and having the power to display it in ways which are easily digestible allows you to turn your big data into rich data – information that has the power of context.

Lack of loyalty

We all have more information at our fingertips than was even imaginable 20 years ago. Great for consumers, who are more empowered than ever before. But for retailers, this makes for a fickle customer. Customer loyalty is at an all-time low, with their demands constantly increasing. Retailers offering added value to their customers above and beyond their competitors can take share from anyone who falls short.

The bricks-and-mortar story

It’s true that physical stores are seeing a downturn in the number of customers they serve – but there is no doubt the shop has a place for many retailers. Giving customers the maximum amount of convenience is key in our fast-paced, digital world. Retailers who are able to use digital technology to enhance the instore experience, and give the customer the impression that physical and digital have merged, are onto a compelling experience which will be hard to beat.

Much of this comes down to offering something the customer really wants. The consistent theme is the omnichannel experience.

The omnichannel enigma

You know the story as both a retailer and a consumer yourself; customers expect the same experience, no matter the channel. Now, more than ever, there is a plethora of channels for a customer to contact and interact with your brand. Retailers that can offer a holistic experience across every touchpoint of the customer’s journey will have the advantage.

The key is in removing silos. If each department and each employee has access to the same customer information, it will have a hugely positive reflection on the customer’s experience.

It’s time to stop the shock headlines and look at the challenges for the opportunities that they present. By addressing these challenges today, retailers can set themselves up for success tomorrow. Why now? Because today, your customer is demanding. Tomorrow, they will expect even more.

Subscribe to the Infomentum blog for updates on the impending launch of our latest research report which takes a look at how businesses need to prepare for 2020.
