Your first book about Cloud Computing (Cloud Computing Explained) was very thorough. What prompted you to write again on a similar topic?
The books target very different requirements. The first focuses on how to integrate existing cloud services into the enterprise and leverage them to create new capabilities or improve existing business processes. The second looks at how to create new software and services that leverage cloud computing and can be delivered through cloud-related channels.
Cloud Computing Explained caters to IT managers and CIOs who are considering cloud computing for their enterprise. It provides a high-level overview of the technology and then leads the reader through the process of assessing the suitability of a cloud-based approach for a given situation. This involves calculating and justifying the investment and developing a solid design that considers the implementation along with the ongoing operations and governance that are required to keep the system running.
However, cloud computing is also of interest to many other stakeholders, such as service providers, users and technologists. In Cloud Computing Architected, Risto Haukioja and I look at cloud computing from the vantage point of a software architect who is developing applications that could potentially be hosted in the cloud. This perspective focuses first on how to use cloud-based platforms and services in developing new software. It then explains what is needed in order to make the code “cloud-ready” so that it can scale on demand and accommodate higher resource efficiencies, for example through multi-tenancy.
Who will be most interested in this book?
The primary readership includes Solution Architects who are designing software today. As they consider the ramifications of cloud computing and attempt to use the newest tools, services and approaches available, they will need to tackle many of the topics described in the book. Other likely readers are consultants, technologists and strategists who are involved with the planning and implementation of information technology at large enterprises or who are involved in designing a consumer-facing service that will need to scale to millions of users.
Beyond these, a secondary audience is those who are only peripherally involved in the above design. Almost everyone in IT, from administrators and programmers to high-level executives, will eventually have some contact with cloud computing. Those who wish to have a solid understanding of cloud software architecture will benefit from a conceptual description of the options and design trade-offs.
Why does Cloud Computing need to go deep into the application architecture?
Most, if not all, future applications should be designed to be cloud-ready. Obviously, not all new software will initially target opportunities where Internet scale and multi-tenancy are necessary. However, there is value in designing the solution so that it can grow to meet increasing needs and tap into potential revenue streams that may not have been originally planned.
The incentive for designing an application to be cloud-ready, even when the need is not immediately apparent, is based on three considerations:
- A solid design has intrinsic value.
- Elasticity is mandatory in a fast-changing business environment.
- Intellectual property is most valuable if it is flexible.
A solid design has intrinsic value. A service-oriented architecture allows a greater degree of flexibility over the lifetime of a service, so that it can cope with a changing set of requirements. If reliability, security and scalability are built into the software from the outset, it will be much more robust and will be able to handle unforeseen events, and new sets of demands, more easily.
Elasticity is mandatory in a fast-changing business environment. An increase in customers or sales is not the only reason you may need to scale up your service. A scalable architecture can cope with sudden spikes in demand, for instance following mergers and acquisitions. A flexible design will allow the application to be re-used for additional business purposes and user groups, such as partners and suppliers.
Intellectual property is most valuable if it is flexible. There is a compelling argument that even applications built for internal use should be architected for scalability and multi-tenancy. If the service replicates common functionality, then you should question why you need it in the first place and cannot obtain the same service from another provider. On the other hand, if it is unique in its offering and represents proprietary intellectual property, you should always keep the option open to license it, or package it as a service that you offer to create an additional revenue stream.
What are the main challenges for architects and developers?
There are many new challenges and opportunities for software development. At a high level, solutions should be able to scale to a worldwide customer base; they need to exploit all possible efficiencies to remain competitive; and they must accommodate integration with a heterogeneous ecosystem in order to leverage external functionality and preserve focus on differentiating capabilities.
These overriding mandates have many specific implications, such as:
- New tools allow developers to more efficiently develop, test and run cloud services.
- Internet-based delivery encourages the use of the browser, and other lean clients, as a presentation vehicle.
- Multi-tenancy and identity federation impose new authentication models.
- A fragmented landscape of disparate services and providers will not work together unless they are efficiently integrated.
- An exploding information volume, associated with cloud scale, requires new storage mechanisms and data models.
- Availability and elasticity increasingly rely on redundancy that must be orchestrated efficiently and reliably.
- Utility-oriented services require new business models and monetization strategies to achieve profitability.
- Platform services provide efficiencies for the entire software development cycle, making it possible to accelerate deployment and incorporate changes more quickly.
- Increased automation facilitates a more efficient operational model that needs to reorient itself from infrastructure to services.
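The multi-tenancy point above can be illustrated with a minimal sketch (class and field names hypothetical): a single shared data store where every read and write is scoped by a tenant identifier, so that one deployment serves many customers while keeping their data logically isolated.

```python
class TenantStore:
    """A single shared store; every access is scoped to one tenant."""

    def __init__(self):
        self._rows = []  # each row carries the tenant it belongs to

    def insert(self, tenant_id, record):
        # Stamp every record with its owning tenant
        self._rows.append({"tenant_id": tenant_id, **record})

    def query(self, tenant_id):
        # Tenant isolation: filter on tenant_id for every read
        return [r for r in self._rows if r["tenant_id"] == tenant_id]

store = TenantStore()
store.insert("acme", {"order": 1})
store.insert("globex", {"order": 2})
acme_rows = store.query("acme")  # globex data is never visible here
```

In a real system the same discipline is typically enforced in the database layer (for example with per-tenant schemas or row-level filtering) rather than in application code.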
What are the solutions you propose?
A variety of techniques are available to address these needs. In many cases, the cloud-based platforms include capabilities, such as auto-scaling, that are critical for cloud-based services. In other cases, there are protocols, technologies and services available that help to fulfil the requirements.
Some of the areas that we examine include a variety of mobile platforms that can connect to back-end cloud systems. For the desktop, richer interfaces are forging new territory in the real-time web. Regardless of the presentation layer, there is a need for tight integration including federated authentication using protocols such as OpenID, OAuth and SAML. We have highlighted some of the trends in scalable storage using both SQL and the now popular NoSQL alternatives.
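To make the federated-authentication idea concrete, here is a minimal sketch of the token pattern that underlies protocols such as OAuth and SAML: an identity provider signs a set of claims, and a relying service verifies the signature before trusting them. This is a simplified HMAC-based illustration, not the actual wire format of any of those protocols, and the secret key is hypothetical.

```python
import base64
import hashlib
import hmac
import json

SECRET = b"shared-secret-with-identity-provider"  # hypothetical key

def sign_token(claims):
    """Identity provider: encode the claims and sign them."""
    payload = base64.urlsafe_b64encode(json.dumps(claims).encode())
    sig = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return payload.decode() + "." + sig

def verify_token(token):
    """Relying service: accept the claims only if the signature checks out."""
    payload, _, sig = token.partition(".")
    expected = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return None  # reject tampered or forged tokens
    return json.loads(base64.urlsafe_b64decode(payload))

token = sign_token({"sub": "alice", "tenant": "acme"})
claims = verify_token(token)
```

Real federated schemes use asymmetric signatures so that the relying service never holds the signing key, but the trust model is the same: claims are only as good as the signature over them.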
In order to achieve both elasticity and high availability, there is a need to re-factor many workloads and design for horizontal scalability. Frameworks like MapReduce can be very effective in dealing with complex problems that are largely data-based. For reliability, on the other hand, there is a trend toward recovery-oriented computing rather than detailed troubleshooting as a means to maximise uptime.
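The MapReduce pattern mentioned above can be sketched in a few lines: a map phase emits key/value pairs independently per input (and so can run in parallel across many nodes), and a reduce phase aggregates the pairs by key. The classic word-count example, in simplified single-process form:

```python
from collections import defaultdict
from itertools import chain

def map_phase(document):
    # Map: emit (word, 1) for every word; each document is independent,
    # so this step parallelizes horizontally across machines
    return [(word.lower(), 1) for word in document.split()]

def reduce_phase(pairs):
    # Shuffle/reduce: group the pairs by key and sum the counts
    counts = defaultdict(int)
    for word, n in pairs:
        counts[word] += n
    return dict(counts)

docs = ["the cloud scales", "the cloud elastically scales"]
pairs = chain.from_iterable(map_phase(d) for d in docs)
word_counts = reduce_phase(pairs)
```

In a real deployment the framework handles partitioning the inputs, shuffling intermediate pairs to reducers, and recovering from failed workers, which is exactly what makes the model attractive at cloud scale.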
Once the service is functional, it is also necessary to look at how to monetize, deploy and operate it. Successful marketing can include techniques such as search engine optimization, social media marketing, or paid advertising. Likewise the business model can rely on advertising or the inclusion of payment services. The software development lifecycle is currently in flux due to the trend toward distributed code repositories and continuous integration while the elasticity of load and performance testing facilitates faster and more agile coding cycles. On the operational side, we are seeing more configuration and ticketing systems hosted in the cloud. At the same time, DevOps initiatives are spearheading new levels of collaboration between development and operational personnel that benefit both sides.
Do you have any parting words about current trends in cloud computing?
The most important trend that I have observed over the past year is the increased focus of enterprises on private and hybrid cloud computing. In some cases, this interest appears to be merely a new name for a continuation of data centre optimization initiatives that began some years ago. For example, through increased virtualization, enterprises can leverage many cloud computing benefits without necessarily outsourcing their entire infrastructure or running it over the Internet. Depending on the size of the organization, as well as its internal structure and financial reporting, there may also be other aspects of cloud computing that become relevant even in a deployment that is confined to a single company. A central IT department can just as easily provide services on-demand and cross-charge businesses on a utility basis as could any external provider. The model would then be very similar to a public cloud with the business acting as the consumer and IT as the provider. At the same time, the security of the data may be easier to enforce and the controls would be internal.
While a hybrid model is the most likely end-point for many enterprises, a realistic look at the industry today reveals that we still have a way to go before we achieve it. It is not uncommon to find small startups today that are fully committed to cloud computing for all their service requirements. Large organizations, on the other hand, have been very cautious, even if they recognize the value that cloud computing can bring to them.
Corporate reluctance comes as no surprise to anyone who has followed the adoption path of emerging technologies over the past few years. Legacy applications, infrastructural investment, regulatory concerns and rigid business processes represent tremendous obstacles to change. Even if there are obvious early opportunities, the transition is likely to take time. However, this doesn’t mean that enterprises are completely stationary. In their own way, most of them began the journey to a private cloud years ago and they are gradually evolving in the direction of a public cloud. We can break down this path by identifying three steps, which are each associated with an increasing level of efficiency.
- Resource efficiencies are usually the first objective of a private cloud implementation. Standardization of components sets the scene for data-centre consolidation and optimization. Each level of resource abstraction, from server virtualization to full multi-tenancy, increases the opportunity to share physical capacity, and thereby reduces the overall infrastructural needs.
- Operational efficiencies target human labour, one of the highest cost factors related to information technology. Ideally, all systems are self-healing and self-managing. This implies a high degree of automation and end-user self service. In addition to a reduction of administration costs, these optimizations also enable rapid deployment of new services and functionality.
- Sourcing efficiencies are the final step and represent the flexibility to provision services, and allocate resources, from multiple internal and external providers without modifying the enterprise architecture. This agility can only be attained if all systems adhere to rigorous principles of service-orientation and service management. They must also include fine-grained metering for cost control and a granular role-based authorization scheme that can guarantee confidentiality and integrity of data. On the plus side, the benefit of reaching this level of efficiency is that applications can enjoy near infinite elasticity of resources, and costs can be reduced to the minimum that the market has to offer.
Once businesses have full sourcing independence, they are flexible in terms of where they procure their services. They can continue to obtain them from IT or they may switch to an external provider that is more efficient and reliable.
Can you share with us the links to your books?
Cloud Computing Explained: Implementation Handbook for Enterprises, Second Edition, Recursive Press 2009:
Cloud Computing Architected: Solution Design Handbook, Recursive Press 2011:
Thanks, John!
John Rhoton is a strategy consultant who advises global enterprise customers on emerging technologies with a current focus on public, private and hybrid cloud computing. He speaks regularly on technology and strategy and is the author of six books including Cloud Computing Explained and Cloud Computing Architected.