
Open Source and OpenStack: Complexity and lack of knowledge raise risks


Open source solutions offer a cost advantage through free licenses or low license costs. However, there are associated costs that should not be neglected. On the one hand, specialized knowledge is necessary to build and develop an open source cloud infrastructure. On the other hand, administrators have to ensure proper infrastructure operations, a task that requires extensive skills in solution administration and maintenance. In most cases, these skills are acquired via external expertise such as advisory or developer resources.

Furthermore, users who decide to build their cloud on pure open source software are limited to the support channels of the open source project. This can be tough and painful, since support is generally provided via forums, chats, Q&A systems and bug trackers. In addition, users are expected to participate actively and contribute to the project, a behavior generally not required in the commercial software world. Luckily, commercially oriented vendors of open source cloud software have identified this service gap and provide support through professional services as part of special license plans for enterprises.

Despite a high level of flexibility and openness as well as lower license costs, it is important to understand that OpenStack is not just a single piece of software. Instead, the open source cloud management solution is a collection of specific components, among others for compute (Nova), storage (Swift, Cinder) and networking (Neutron) capabilities, which must be tightly integrated to build a complete and powerful OpenStack based cloud environment. The project takes on new functionality with each release. In addition, a community of developers and vendors contributes further add-ons and source code for maintenance and other improvements. The use of OpenStack can therefore lead to unpredictable complexity and an overall increase in risk.
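To make the component interplay concrete, the following is a minimal, hedged sketch of how Nova, Neutron and Cinder come together when provisioning a single server. It uses the openstacksdk Python library, which the article itself does not mention; the cloud name, image, flavor and network names are placeholders, not real values.

```python
# Minimal sketch: provisioning a server that touches Nova (compute),
# Neutron (networking) and Cinder (block storage) via openstacksdk.
# Cloud name, image, flavor and network names are assumptions.
import openstack

conn = openstack.connect(cloud="my-openstack")  # reads clouds.yaml

image = conn.compute.find_image("ubuntu-14.04")
flavor = conn.compute.find_flavor("m1.small")
network = conn.network.find_network("private")

server = conn.compute.create_server(
    name="demo-server",
    image_id=image.id,
    flavor_id=flavor.id,
    networks=[{"uuid": network.id}],
)
server = conn.compute.wait_for_server(server)

# Attach a 10 GB Cinder volume to the freshly booted Nova instance.
volume = conn.block_storage.create_volume(name="demo-volume", size=10)
conn.compute.create_volume_attachment(server, volume_id=volume.id)
```

Even this trivial scenario already crosses three component boundaries; a production environment adds identity, images, telemetry and upgrades on top, which is exactly where the complexity described above comes from.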

IT organizations that attempt to deal with this complexity on their own, integrating all components from scratch and staying up to date at all times, risk creating their own unmanageable cloud solution instead of using an industry standard. Customizing OpenStack precisely to individual company requirements can easily lead to an OpenStack environment that is incompatible with external OpenStack based cloud infrastructure. Connecting internal and external cloud infrastructure in a hybrid scenario then becomes quite tricky.

The increasing relevance of OpenStack as a central technology component within cloud environments leads to a higher demand for specialized consultancy, integration and support services. This market is still in a nascent stage, and the big IT vendors are currently training their staff. For now, the supply of readily trained, skilled and experienced OpenStack administrators, architects and cloud service brokers is negligible. CIOs should immediately plan how to build basic OpenStack skills within their IT organizations. Even if specialized service contractors can help during the implementation and operation of OpenStack based clouds, IT architects and managers should retain the main responsibility and know what is happening. OpenStack is not an instant meal that just needs warming up but a complex technology platform composed of several individual components, whose configuration is more like preparing a multi-course gourmet dinner. Skills and passion are on the most wanted list.


Amazon WorkMail: Amazon AWS is moving up the cloud stack


For a long time the Amazon Web Services portfolio was the place to go for developers and startups that used the public cloud infrastructure to launch test balloons or chase their dream of becoming the next billion dollar company. Over the years Amazon understood that startups don’t have the deepest pockets and that the real money comes from established companies. New SaaS applications have been released to address enterprises that still haven’t found their way to the Amazon cloud. The next coup is Amazon WorkMail, a managed e-mail and calendar service.

Overview: Amazon WorkMail

Amazon WorkMail is a fully managed e-mail and calendar service like Google Apps for Work or Microsoft Office 365/Microsoft Exchange. This means that customers don’t have to administer the e-mail infrastructure and the necessary servers and software; they only take responsibility for managing users, e-mail addresses and security policies at the user level.

Amazon WorkMail offers access via a web interface and supports Outlook clients as well as mobile devices via the Exchange ActiveSync protocol. The administration of all users is handled with the recently released AWS Directory Service.

Amazon WorkMail is integrated with several existing AWS services like AWS Directory Service, AWS Identity and Access Management (IAM), AWS Key Management Service (KMS) and Amazon Simple Email Service (SES). The integration with Amazon WorkDocs (formerly Amazon Zocalo) allows sending and sharing documents within an e-mail workflow.
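As a rough illustration of what “fully managed” means operationally, here is a hedged sketch of user administration with the boto3 WorkMail client (an API that postdates this article; the region, organization ID, user name and domain are placeholders):

```python
# Hypothetical sketch: managing WorkMail users with boto3's 'workmail'
# client. Organization ID and user details are placeholders; in practice
# the users live in the AWS Directory Service directory mentioned above.
import boto3

workmail = boto3.client("workmail", region_name="eu-west-1")

ORG_ID = "m-1234567890abcdef0"  # placeholder WorkMail organization

# Create a user in the organization's directory ...
user = workmail.create_user(
    OrganizationId=ORG_ID,
    Name="jdoe",
    DisplayName="Jane Doe",
    Password="S3cure-Example!",
)

# ... and enable a mailbox for it by assigning an e-mail address.
workmail.register_to_work_mail(
    OrganizationId=ORG_ID,
    EntityId=user["UserId"],
    Email="jane.doe@example.com",
)
```

Everything below this level (servers, storage, patching, mail routing) stays with Amazon, which is the whole point of the managed model.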

E-mail as a gateway drug

E-mail. A no-brainer, you’d think. Yet IBM and Microsoft have both invested in this topic recently, and e-mail remains a slow-burner. E-mail belongs to the category of “low-hanging fruit”: products with which success can be gained quickly and without much effort.

In the case of Amazon WorkMail the portfolio extension is a logical step. The development of the service catalogue with offerings like Amazon WorkSpaces (desktop-as-a-service) and Amazon WorkDocs (file sync and share) specifically targets enterprise customers for whom the Amazon cloud has not been a point of contact so far. There are several reasons for this. The main one is that the Amazon cloud infrastructure is a programmable building block, primarily attractive to those who want to develop their own web-based applications on it. With the help of “value-added services” or “enablement services”, additional value can be created out of pure infrastructure resources like Amazon EC2 (compute) or Amazon S3 (object storage). Because at the end of the day an EC2 instance is just a virtual machine and offers no additional value on its own.

Most companies that want to deal with low complexity and little effort at the infrastructure level, and to see success in the short run, are overwhelmed by the self-service mode of the Amazon cloud. They mostly lack the necessary cloud knowledge and developer skills to ensure scalability and high availability of the virtual infrastructure. The AWS service offering is by now versatile but still addresses true infrastructure professionals and developers.

The continuous portfolio development moves AWS up the cloud stack. After infrastructure-as-a-service (IaaS) and platform-as-a-service (PaaS), WorkSpaces, WorkDocs and WorkMail make sure Amazon has finally arrived in the software-as-a-service (SaaS) market. Amazon has started to use its own cloud infrastructure to offer higher value services. Oracle is doing exactly the opposite: having started at the SaaS layer, the database giant is now moving down the cloud stack to IaaS.

E-mail is still an important business process in the enterprise world. Thus, it is a logical step for Amazon to be part of the game as well. At the same time WorkMail can act as a gateway drug for potential new customers to explore the Amazon cloud and discover other benefits. Furthermore, the partner network of system integrators can use Amazon WorkMail to offer its customers a managed e-mail solution. How successful Amazon WorkMail will be remains to be seen. Google Apps, Microsoft Hosted Exchange, Zoho and Mailbox.org (powered by Open-Xchange) are just some of the mature solutions already on the market.

In the end one important point must be considered: IaaS the Amazon way is the ideal path for developing one’s own web applications and services, while managed cloud services and SaaS help to adopt new technologies in the short run. Amazon WorkSpaces, WorkDocs and WorkMail belong to the latter category.

The way to the IaaS holy grail


In 2014, cloud computing finally arrived in Germany. A current study by Crisp Research among 716 IT decision makers gives a representative picture of cloud adoption in the DACH market. For 19 percent of the sample, cloud computing is a regular part of the IT agenda and of production environments. 56 percent of the companies are already planning and implementing cloud services and technologies and using them for first projects and workloads. Crisp Research forecasts that German companies will spend around 10.9 billion EUR on cloud services and technologies as well as integration and consulting in 2015. More and more companies are therefore evaluating the use of infrastructure-as-a-service (IaaS). For German IT decision makers this raises the question of which selection criteria to consider. Which deployment model is the right one? Is a US provider insecure per se? Is a German provider mandatory? What options remain after Snowden and co.?

Capacity Planning, Local or Global, Service?

Before using IaaS there is the fundamental question of how and for what purpose the cloud infrastructure will be used. Capacity planning plays a decisive role here. In most cases companies know their applications and workloads and can thus estimate how scalable the infrastructure must be in terms of performance and availability. However, scalability must also be considered from a geographic point of view. If the company focuses mainly on the German or DACH market, a local provider with a data center in Germany is enough to serve its customers. If the company wants to expand into global markets in the midterm, a provider with a global footprint that also operates data centers in the target markets is recommended. The key questions are:

  • What is the purpose of using IaaS?
  • Which capacities are necessary for the workloads?
  • What kind of reach is required? Local or global?

When talking about scalability, the term “hyper scaler” is often used. These are providers whose cloud infrastructure can theoretically scale endlessly, among them Amazon Web Services, Microsoft Azure and Google. The term “endlessly” should be treated with caution: even the big boys hit the wall eventually, since the virtual infrastructure is based on physical systems and the hardware itself does not scale.

Companies with a strategy to grow into global target markets in the midterm should concentrate on an internationally operating provider. Besides the above-named Amazon AWS, Google and Microsoft, HP, IBM (Softlayer) and Rackspace also come into play, all of which operate public or managed cloud offerings. Whoever bets on a “global scaler” from the beginning gains an advantage later on: the virtual infrastructure and the applications and workloads running on top of it can be deployed more easily, accelerating time to market.

Cloud connectivity (low latency, high throughput and availability) should not be underestimated either. Is it enough that the provider and its data centers serve only the German market, or is there a worldwide distributed infrastructure of data centers that are linked to each other?

Two more parameters are the cloud model and the related type of service. Furthermore, hybrid and multi cloud scenarios should be considered. The key questions are:

  • Which cloud model should be considered?
  • Self-service or managed service?
  • Hybrid and multi cloud?

Current offerings distinguish between public, hosted and managed private clouds. Public clouds are built on a shared infrastructure operated by service providers. Customers share the same physical infrastructure and are logically separated by a virtualized security infrastructure. Web applications are an ideal use case for public clouds, since standardized infrastructure and services are sufficient. A hosted cloud transfers the ideas of the public cloud into a hosted version administered by a local provider. All customers are located on the same physical infrastructure and are virtually separated from each other; in most cases the provider operates a local data center. A managed private cloud is an advanced version of a hosted cloud. It is especially attractive to companies that want to avoid the public cloud model (shared infrastructure, multi-tenancy) but have neither the financial resources nor the knowledge to run a cloud in their own IT infrastructure. In this case, the provider operates an exclusive, dedicated area of its physical infrastructure for the customer, who can use the managed private cloud exactly like a public cloud but on a non-shared infrastructure located in the provider’s data center. In addition, the provider offers consultancy services to help the customer transfer applications and systems into the cloud or develop them from scratch.

The hyper scalers and global scalers named above are mainly public cloud providers. In a self-service model the customers are responsible for building and operating the virtual infrastructure and the applications running on top of it. In particular, cloud players like Amazon AWS, Microsoft Azure and Google GCE offer their infrastructure services based on a public cloud model with self-service. Partner networks help customers build and run the virtual infrastructure, applications and workloads. Public cloud IaaS offerings with self-service are very limited in Germany: the only providers are ProfitBricks and JiffyBox by domainFactory, and JiffyBox focuses on webhosting rather than enterprise workloads. CloudSigma from Switzerland should be named as a native provider in DACH. This German reality is also reflected in providers’ strategies: the very first German public IaaS provider, ScaleUp Technologies (2009), completely renewed its business model by focusing on managed hosting plus consultancy services.

Consultancy is the keyword in Germany. This is the biggest differentiator from the international markets. German companies prefer hosted and managed cloud environments including extensive service and value-added services. In this area providers like T-Systems, Dimension Data, Cancom, Pironet NDH and Claranet are present. HP has also recognized this trend and offers consultancy services in addition to its OpenStack-based HP Helion cloud offering.

Hybrid and multi cloud environments shouldn’t be neglected in the future. A hybrid cloud connects a private cloud with the resources of a public cloud: a company operates its own cloud and uses the scalability and economies of scale of a public cloud provider to obtain further resources like compute, storage or other services on demand. A multi cloud concept extends the hybrid cloud idea to the number of clouds involved. More precisely, it is about n clouds that are connected, integrated or used in any form. For example, cloud infrastructures are connected so that applications can use several infrastructures or services in parallel, depending on capacity utilization or current prices. Even distributed or parallel storage of data is possible in order to ensure its availability. It is not necessary for a company to interconnect every cloud it uses in order to run a multi cloud scenario; if more than two SaaS applications are part of the cloud environment, it is basically already a multi cloud setup.
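What “using several infrastructures in parallel” can look like in code is shown by the following hedged sketch using Apache Libcloud, a library the article does not mention; the credentials, regions and project names are placeholders:

```python
# Illustrative multi cloud sketch with Apache Libcloud: one unified API
# over two providers. Credentials, regions and project are placeholders.
from libcloud.compute.types import Provider
from libcloud.compute.providers import get_driver

EC2 = get_driver(Provider.EC2)
GCE = get_driver(Provider.GCE)

aws = EC2("AWS_ACCESS_KEY", "AWS_SECRET_KEY", region="eu-central-1")
gce = GCE("service-account@my-project.iam.gserviceaccount.com",
          "key.json", project="my-project", datacenter="europe-west1-b")

# The same call works against both clouds - the essence of multi cloud:
for cloud in (aws, gce):
    for node in cloud.list_nodes():
        print(node.name, node.state, node.public_ips)
```

An abstraction layer like this is one way to keep the n clouds interchangeable instead of hard-wiring each provider’s native API into the application.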

At the application level, Amazon AWS doesn’t offer extensive hybrid cloud functionality at present but is expanding steadily. Google doesn’t offer any hybrid cloud capabilities. Thanks to their public and private cloud solutions, Microsoft and HP are able to offer hybrid cloud scenarios on a global scale. In addition, Microsoft has the Cloud OS Partner Network, which enables companies to build Microsoft based hybrid clouds together with a hosting provider. As a German provider, T-Systems has the capabilities to build hybrid clouds on a local as well as a global scale. Local providers like Pironet NDH offer hybrid capabilities on German ground.

Myths: Data Privacy and Data Security

Since Edward Snowden and the NSA scandal, many myths have been created around data privacy and data security. Providers, especially from Germany, advertise higher security and protection against espionage and other attacks when the data is stored in a German data center. The source of the confusion: when it comes to security, two different terms are frequently mixed up, data security and data privacy.

Data security means the implementation of all technical and organizational procedures to ensure confidentiality, availability and integrity of all IT systems. Public cloud providers offer far better security than a small business is able to achieve. This is due to the investments cloud providers make to build and maintain their cloud infrastructures; they employ staff with the right mix of skills and have created appropriate organizational structures, investing billions of US dollars annually for this purpose. Only few companies outside the IT industry are able to achieve the same level of IT security.

Data privacy is about the protection of personal rights and privacy during data processing. This topic causes the biggest headaches for most companies, because legislators do not make it easy for them. A customer has to audit the cloud provider for compliance with the local federal data protection act. Here it is advisable to rely on the expert report of a public auditor, since it is too time and resource consuming for a public cloud provider to be audited by each of its customers. Data privacy is a very important topic; after all, sensitive data is at stake. However, it is essentially a legal matter that must be underpinned by data security procedures.

A German data center as protection against the espionage of friendly countries is and will remain a myth. Where there’s a will, there’s a way: if an attacker wants the data, it is only a question of the criminal energy and the funds he is willing to invest. If the technical hurdles are too high, there is still the human factor as an option, and a human is generally “purchasable”.

However, US cloud players have recognized the concerns of German companies and have announced or started to offer their services from German data centers, among others Salesforce (in partnership with T-Systems), VMware, Oracle and Amazon Web Services. Nevertheless, a German data center has nothing to do with higher data security. It merely fulfills:

  • The technical challenges of cloud connectivity (low latency, high throughput and availability).
  • The regulatory framework of the German data privacy level.

Technical Challenge

In a general technical assessment of an IaaS provider, the following characteristics should be considered:

  • Scale-up or scale-out infrastructure
  • Container support for better portability
  • OpenStack compatibility for hybrid and multi cloud scenarios

Scalability is the ability to increase the overall performance of a system by adding more resources, either complete computing units or granular units like CPU or RAM. With this approach the system performance can grow linearly with increasing demand, so unexpected load peaks can be absorbed and the system doesn’t break down. A distinction is made between scale-up and scale-out. Scale-out (horizontal scalability) increases the system performance by adding complete compute units (virtual machines) to the overall system. In contrast, scale-up (vertical scalability) increases the system performance by adding further granular resources such as storage, CPU or RAM to an existing unit. A closer look at the top cloud applications shows that they are mainly developed by startups, are uncritical workloads, or are developments from scratch. Attention should be paid to the scale-out concept, which makes it complicated for enterprises to move existing applications and systems into the cloud: at the end of the day, the customer has to develop everything from scratch, since a system that was not built as a distributed system will not work as intended on a distributed scale-out cloud infrastructure.
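The difference can be captured in a toy model (purely illustrative, no provider API; all names are hypothetical): scale-out adds whole nodes, while scale-up grows one node until it hits the ceiling of the biggest available machine.

```python
# Illustrative model (no provider API): scale-out adds whole compute
# units, scale-up grows a single unit. All names here are hypothetical.
from dataclasses import dataclass

@dataclass
class Node:
    cpus: int = 2
    ram_gb: int = 4

def scale_out(nodes, total_load, max_load_per_node=0.7):
    """Horizontal: add identical VMs until per-node load is acceptable."""
    while total_load / len(nodes) > max_load_per_node:
        nodes.append(Node())          # provision one more compute unit
    return nodes

def scale_up(node, load, max_load=0.7, cpu_limit=32):
    """Vertical: double CPU/RAM of one machine - bounded by the biggest
    available instance size, which is why scale-up hits a ceiling."""
    while load > max_load and node.cpus < cpu_limit:
        node.cpus, node.ram_gb = node.cpus * 2, node.ram_gb * 2
        load /= 2                     # more capacity halves relative load
    return node

cluster = scale_out([Node()], total_load=3.5)
print(len(cluster), "nodes after scale-out")
print(scale_up(Node(), load=3.5))
```

The `scale_out` path only works if the application itself can distribute its work across the added nodes, which is precisely the redevelopment effort described above.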

IT decision makers should expect their IT architects to detach from the underlying infrastructure in the future, moving applications and workloads across different providers without borders. Container technologies like Docker make this possible. From the IT decision maker’s point of view, choosing a provider that supports Docker, for example, is thus a strategic tool to optimize modern application deployments. Docker helps to ensure the portability of an application, which increases availability and decreases overall risk.
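A minimal sketch of that portability argument, using the Docker SDK for Python (an assumption; the article only names Docker itself):

```python
# Minimal portability sketch with the Docker SDK for Python. The same
# immutable image runs unchanged on any host with a Docker engine,
# regardless of the underlying cloud provider.
import docker

client = docker.from_env()  # talks to the local Docker engine

# Pull and start the same image on this host ...
container = client.containers.run(
    "nginx:alpine", detach=True, ports={"80/tcp": 8080}
)
print(container.status)

# ... on another provider's VM, the identical two lines suffice -
# which is exactly the cross-provider portability described above.
```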

Hybrid and multi cloud scenarios are not just a trend but reflect reality. Cloud providers should act in the interest of their customers and, instead of using proprietary technology, also bet on open source technologies and de facto standards like OpenStack. They would thereby enable interoperability between cloud service providers and create the conditions for a comprehensive ecosystem in which users get better comparability as well as the capability to build and manage true multi cloud environments. This is the groundwork that empowers IT buyers to benefit from the strengths of individual providers and the best offerings on the market. Open approaches like OpenStack foster the future ability of IT buyers to act across provider and data center borders. This makes OpenStack an important cloud sourcing driver.

Each way is an individual path

Depending on the requirements, the way to the IaaS holy grail can become very rocky. In particular, enterprise workloads are more difficult to handle than novel web applications. Regardless, it must be kept in mind that applications that are to run on IaaS usually need to be developed from scratch. The extent depends on the particular provider, but in most cases this is necessary in order to exploit provider-specific characteristics. The following points help in mastering the individual path:

  • Know and understand your own applications and workloads
  • Perform a data classification
  • Don’t confuse data privacy with data security
  • Evaluate the cloud model: self-service or managed service
  • Check hybrid and multi cloud scenarios
  • Estimate the required local and global reach
  • Don’t underestimate cloud connectivity
  • Evaluate container technologies for application portability
  • Consider OpenStack compatibility

Cloud Market 2015: The Hunger Games are over.


Last year, the cloud market gave us great pleasure with a lot of thrilling news. Numerous new data centers and innovative services show that the topic has established itself in the market. The hunger games are finally over. Admittedly, the German market has developed quite slowly compared to international standards. However, an adoption rate of almost 75 percent shows a positive trend, backed by two credible reasons: providers are finally addressing the needs and requirements of their potential customers, and at the same time more and more users are jumping on the cloud bandwagon.

Cloud providers at a glance

In 2015, cloud providers will enjoy a large clientele in Germany. To this end, the majority of providers have strategically positioned themselves with a German data center, empowering local customers to physically store their data locally and to fulfill the requirements of the German Federal Data Protection Act (BDSG).

  • Amazon Web Services made the biggest move of all US providers. A region especially for the German market is a commitment of the IaaS market leader to Germany. At the same time Amazon has positioned itself strategically in central Europe, enhancing its attractiveness for customers in adjoining countries. From a technological point of view (reduction of latency etc.) this is not a negligible step. Services especially for enterprises (AWS Directory Service, AWS CloudTrail, AWS Config, AWS Key Management Service, AWS CloudHSM) show that Amazon has developed from a startup enabler into a real alternative for enterprises. Amazon has underlined this with significant German enterprise reference customers (like Talanx, Kärcher and Software AG). However, Amazon still lacks powerful hybrid cloud functionality at the application level and needs to improve; after all, enterprises won’t go for a pure public cloud approach in the future.
  • Microsoft’s “cloud-first” strategy pays off. In particular, the introduction of Azure IaaS resources was an important step. Besides an existing customer base in Germany, Microsoft has the advantage of supporting all cloud operation models. Alongside the Azure public cloud, hosted models (Cloud OS Partner Network, Azure Pack) as well as private cloud solutions (Windows Server, System Center, Azure Pack) are available, which customers can use to build hybrid scenarios. In addition, rumors dating back to 2013 are growing stronger that Microsoft will open a German data center in 2015 to offer cloud services under German law.
  • ProfitBricks, one of the few IaaS public cloud providers originally from Germany, is growing and thriving. Besides a new data center location in Frankfurt, several new hires in 2014 show that the startup is developing well. An update of its Data Center Designer (a WYSIWYG editor) underlines the technological progress. Compared to other IaaS providers like Amazon or Microsoft, a portfolio of value-added services is still missing; this has to be compensated for with a convincing and powerful partner network.
  • Last year Rackspace started to refocus from public IaaS to managed cloud services, returning to one of its strengths, its “Fanatical Support”. When it comes to trends like OpenStack or DevOps, however, Rackspace keeps pressing forward. After all, no company can afford to ignore these technologies and services in the future if it wants to offer its developers more freedom to create new digital applications faster and more efficiently.
  • At the end of 2014, IBM announced an official Softlayer data center in Frankfurt. As part of the global data center strategy, this happened in cooperation with colocation provider Equinix. The Softlayer cloud offers the benefit of providing bare metal resources (physical servers) in the same way as virtual machines.

Even if the market and the providers have made good progress, Crisp Research has identified the following challenges that still need to be addressed (excerpt):

  • The importance of hybrid capabilities and interfaces (APIs) for multi cloud approaches is growing.
  • Standards like OpenStack and OpenDaylight must be supported.
  • Advanced functionality for enterprise IT (end-to-end security, governance, compliance et al.) is needed.
  • There is a big need for more cloud connectivity based on cloud hubs within colocation data centers.
  • Price transparency has to improve significantly.
  • Ease of use needs to have a high priority.
  • Enablement services for the Internet of Things (IoT) are required; only providers with an appropriate service portfolio will be in the vanguard in the long term.

Depending on the provider’s portfolio, the requirements above are fulfilled partly or predominantly. What all providers have in common, however, is the question of how to address their target groups. Historically, direct sales are difficult in the German IT market; new potential customers are mainly reached with the aid of partners or distributors.

View of the users

More than 74 percent of German companies are planning, implementing and using cloud services and technologies in their production environments. This is a strong signal that cloud computing has finally arrived in Germany. For 19 percent of German IT decision makers, cloud computing is an integral part of their agenda and operations. Another 56 percent of the companies are planning, implementing or using cloud as part of first projects or workloads. Here, hybrid and multi cloud infrastructures play a central role in ensuring integration at the data, application and process levels.

This leads to the question: why now, after more than 10 years? After all, Amazon AWS started in 2006 and Salesforce was already founded in 1999. One reason is the fundamentally slow adoption of new technologies, arising from caution and German efficiency. The majority of German companies usually wait until new technologies have settled and proven successful in use. Traditionally, early adopters are very few in Germany.

But this is not the main reason. The cloud market had to mature. When Salesforce and later Amazon AWS entered the market, few services were available that fulfilled the requirements or were an equal substitute for existing on-premise solutions. For this reason IT decision makers still relied on well-tried solutions at that time. In addition, there was no need to change anything, not least because the benefits of the cloud weren’t clear, or the providers didn’t communicate them well enough. Another reason is the fact that sustainable changes in the IT industry happen over decades, not over a couple of years or months. For all those IT decision makers who bet on classical IT solutions during the first two cloud phases, the amortization periods and IT lifecycles are ending now. Those who have to renew hardware and software solutions today have cloud services on their list for their IT environments.

Essential reasons that deferred the cloud transformation are (excerpt):

  • Insecurity due to misinformation from many providers that sold virtualization as cloud.
  • Legal topics had to be clarified.
  • The providers had to build trust.
  • Cloud knowledge was few and far between; lack of knowledge, complexity and integration problems are still the core issues.
  • Applications and systems have to be developed in and for the cloud from scratch.
  • There were no competitive cloud services from German providers.
  • There were no data centers in Germany to fulfill the German Federal Data Protection Act (BDSG) and other laws.

German companies are halfway through their cloud transformation process. Meanwhile they are looking at multi cloud environments based on infrastructure, platforms and services from various providers. This part of the Digital Infrastructure Fabric (DIF) is the foundation of their individual digital strategy, on which new business models and digital products, e.g. for the Internet of Things, can be operated.

Study: OpenStack in the Enterprise (DACH Market)


OpenStack is making big headlines these days. The open source cloud management framework is no longer an infant technology suited only to proofs of concept for service providers, academic institutions and other early users.

Over the last 12 months OpenStack has gained serious momentum among CTOs and experienced cloud architects. But what about the typical corporate CIO? What are the key use cases, potential benefits and main challenges when it comes to implementing OpenStack within a complex enterprise IT environment? How far have CIOs and data center managers in the DACH region pushed their evaluations and proofs of concept around the new cloud technology? Where can we find the first real-world implementations of OpenStack in the German-speaking market?

This survey presents the first empirical findings and answers to the questions raised above regarding the enterprise adoption of OpenStack. In cooperation with HP Germany, Crisp Research conducted 716 interviews with CIOs from the DACH region across various industries. The interviews were collected and analyzed between July and October 2014.

If you are interested in the executive version of the OpenStack DACH study, get in touch with me via my Crisp Research contact details.

Top 10 Cloud Trends for 2015


In 2015, German companies are going to invest around 10.9 billion euro in cloud services and technologies as well as integration and consulting. Admittedly, the German market has developed quite slowly compared to international standards, but in 2015 this market too will mature. The reasons can be found in this article: Crisp Research has identified the drivers behind this development and derived the top 10 trends of the cloud market for 2015.

1. Cloud Ecosystems and Marketplaces

This year cloud ecosystems and marketplaces are becoming more popular. Offerings like the Deutsche Telekom Business Marketplace, the Deutsche Börse Cloud Exchange or the German Business have been present for some time, and service providers are offering marketplaces to increase the scope of their services. However, the buyer side is still not keen, for several reasons; the lack of integration and weak demand are just two of them. As the cloud market matures, though, demand could rise. Cloud marketplaces are part of the logical development of the cloud, giving IT buyers more convenient access to categorized IT resources. Distributors have also understood the importance of cloud marketplaces and are moving to offer their own in order to remain attractive in the channel. Vendors like the startup Basaas offer a „Business App Store as a Service“ concept, which can be used to create multi-tenant public cloud marketplaces or internal business app stores.

Integration is a technical challenge and not easy to solve. However, with a powerful ecosystem of providers under the lead of a neutral marketplace operator, the necessary strengths could be bundled to ensure a holistic integration of services and take the biggest burden off the buyer side.

2. Secret Winner: Consultants and Integrators

Complexity is something IaaS providers keep quiet about, and for some customers this has already ended in catastrophe. IaaS looks quite simple on paper, but starting a virtual machine with an application on it has basically nothing to do with a cloud architecture. Running a scalable and failure-resistant IT infrastructure in the cloud requires more than administration know-how: developer skills and an understanding of the cloud concept are basic requirements, because modern IT infrastructures for cloud services are developed like an application. For this purpose, providers like Amazon Web Services, Microsoft Azure, Rackspace or HP provide building blocks of higher value-added services with which exactly this scalability and failure-resistance can be achieved, since both are the responsibility of the customer and not of the cloud provider. ProfitBricks provides “Live Vertical Scaling”, which follows a scale-up principle and can be used without special cloud developer skills.
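One such building block is AWS Auto Scaling. The following hedged boto3 sketch (a library and names not taken from the article; the AMI and zones are placeholders) shows how it is the customer who wires up scalability and failure-resistance:

```python
# Hedged sketch: AWS Auto Scaling as one of the "building blocks"
# mentioned above. Group names, AMI and zones are placeholders.
import boto3

autoscaling = boto3.client("autoscaling", region_name="eu-central-1")

autoscaling.create_launch_configuration(
    LaunchConfigurationName="web-lc",
    ImageId="ami-12345678",        # placeholder AMI
    InstanceType="t2.micro",
)

# Keep between 2 and 10 instances alive; the service replaces failed
# instances automatically - failure-resistance configured by the customer.
autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="web-asg",
    LaunchConfigurationName="web-lc",
    MinSize=2,
    MaxSize=10,
    AvailabilityZones=["eu-central-1a", "eu-central-1b"],
)
```

The provider supplies the mechanism; designing the group sizes, health checks and zones so that the application actually survives failures is exactly the skill gap described above.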

The challenge for a majority of CIOs is that their IT teams lack the necessary cloud skills, or that not enough has been invested in advanced training yet. This, however, means that a big market (2.9 billion EUR in 2015) is opening up for consultants and system integrators. Classical system houses and managed services providers can also benefit from this knowledge gap if they manage to transform themselves fast enough. The direkt gruppe and TecRacer are two cloud system integrators from Germany that have impressively shown they can handle public cloud projects.

3. Multi Cloud as a long runner

The multi cloud is an abiding theme; after all, its importance has been proclaimed for years. With the growing demand for cloud services on the buyer side and the increasing maturity level on the vendor side, the area of use for cloud-spanning deployments is constantly increasing. This is not only due to offerings like the Equinix Cloud Exchange, which enables direct connections between several cloud providers and one’s own enterprise IT infrastructure. Based on APIs, a central portal can be developed that offers IT buyers consistent access to the IT resources of various providers.

Within the multi cloud context, OpenStack and technologies like SaltStack and Docker play a central role. The worldwide adoption of OpenStack is rising continuously: already 46 percent of all deployments run in production environments, 45 percent of which are on-premise private clouds. In Germany, too, almost one third (29.8 percent) of cloud-using companies are already dealing actively with OpenStack. In parallel with the increasing importance of OpenStack, its relevance for cloud sourcing in the context of multi cloud infrastructures is growing, as it ensures interoperability between a variety of cloud providers.

To support DevOps strategies and to avoid writing comprehensive Puppet or Chef scripts, SaltStack is used more and more often for the configuration management of big, distributed cloud infrastructures. In this context the Docker container wave will keep growing in 2015. By December 2014 the Docker Engine had already been downloaded 102.5 million times, a growth of 18.8 percent within a year. In addition, the team announced multi-container extensions to support the orchestration of applications across several infrastructures. In the context of container technologies it is worth taking a look at GiantSwarm from Germany, which has developed a container-based microservice infrastructure.
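As a flavor of what SaltStack-driven configuration management looks like, here is a minimal sketch using Salt’s Python LocalClient (the article names SaltStack but no specific interface; the target pattern and state name are assumptions):

```python
# Minimal sketch: triggering SaltStack configuration management from
# Python via salt's LocalClient (runs on the salt master). The target
# pattern and the 'webserver' state are hypothetical examples.
import salt.client

local = salt.client.LocalClient()

# Ping all connected minions ...
print(local.cmd("*", "test.ping"))

# ... then apply a (hypothetical) 'webserver' state to every minion
# whose ID starts with 'web', e.g. to roll out a Docker host config.
result = local.cmd("web*", "state.apply", ["webserver"])
for minion, states in result.items():
    print(minion, "->", "ok" if states else "no changes")
```

Declarative states like this replace hand-written orchestration scripts, which is the appeal over maintaining large Puppet or Chef codebases mentioned above.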

4. Public Clouds are on the rise

In the past, public cloud providers faced a barrage of criticism. In 2015, however, they will see a distinct number of new customers. One reason is the groundwork they have done in the recent past in order to address enterprise customers as well. Another reason is the strategic change of managed cloud providers and of already cloud-transformed system houses with their own data centers.

Public cloud players like Amazon AWS and Microsoft Azure have sent prices spiraling downwards, and of course the customer side has noticed. Local managed cloud providers (MCPs) are drawn more and more into price discussions with their customers, an unpleasant situation: virtual machines and storage are sold at competitive prices that a small provider cannot keep up with.

Strategies are changing so that, in certain situations, MCPs fall back on public cloud infrastructure to offer their customers lower costs at the infrastructure level. For this they have to form partnerships and build knowledge of the respective cloud infrastructure in order to run and maintain the virtual infrastructure, not only offer consulting services. At the same time they benefit from new functions released by the public cloud providers and from their global reach. A provider with a data center in one local market can serve exactly that market; customers, however, demand to enter new target markets in the short term without big additional effort. Public cloud providers’ data centers are present in many regions worldwide and offer exactly this capability. MCPs still keep their local data centers to offer customers services tied to local requirements (e.g. legal matters). In this context hybrid scenarios play a major role, with the multi cloud taking priority.

5. Cloud Connectivity and Performance

The continuous shift of mission-critical data, applications and processes to external cloud infrastructure means that CIOs not only have to rethink their operational IT concepts (public, private, hybrid) but also have to change their network architectures and connection strategies. The selection of the right location is a crucial competitive advantage. Modern business applications are already provided over cloud infrastructures, so from today’s CIO point of view a stable and performant connection to systems and services is essential, and this trend will keep strengthening. Direct connections like AWS Direct Connect or Microsoft ExpressRoute make this easier to handle: dedicated network connections are established between a public cloud provider and an enterprise IT infrastructure in the data center of a colocation provider.

The ever increasing data traffic requires reliable and, above all, stable connectivity in order to access data and information at all times. This becomes even more important when business-critical processes and applications are outsourced to cloud infrastructure. Access has to be ensured at any time and with low latency; otherwise, substantial financial and reputational damage may result. The quality of a cloud service depends significantly on its connectivity and backend performance. An essential characteristic here is the connectivity of the data center, which must guarantee the customer stable and reliable access to the cloud services at all times. Data centers are the logistics centers of the future and, as logistical data hubs, are experiencing their heyday.

6. Mobile Backend Development

The digital transformation is affecting every part of our lives. Around 95 percent of all smartphone applications are connected to services running on servers that are distributed over data centers worldwide. Without a direct and mostly constant connection, these apps are not functional.

This means that modern mobile applications no longer work without a stable, globally oriented backend infrastructure. The same applies to services in the Internet of Things (IoT). A mix of distributed intelligence on the device and in the backend infrastructure ensures holistic communication; in addition, the backend infrastructure ensures the connection among all devices.

A public cloud infrastructure provides the ideal foundation for this. On the one hand, the leading providers offer global reach. On the other hand, they already have ready-made micro services in their portfolios, representing specific functionalities that don’t need to be developed from scratch and that can be used within one’s own backend service. Other providers of mobile-backend-as-a-service (MBaaS) or IoT platforms have specialized in the enablement of mobile backend or IoT services. Examples are Apinauten, Parse (now part of Facebook) and Kinvey.

7. Cloud goes Vertical

In the first phase of the cloud, providers of software-as-a-service (SaaS) applications concentrated on general, horizontal solutions like productivity suites or CRM systems. The needs of individual industries weren’t considered much. One reason was the lack of cloud-ready ISVs (Independent Software Vendors); most hadn’t found their way into the cloud yet.

With the emerging cloud transformation of ISVs and the continuous entrance of new vendors, the SaaS market is growing, and with it the offering of vertical solutions tailored to specific industries. Examples are Opower and Enercast in the area of smart energy, Hope Cloud for the hotel industry and trecker.com in the agricultural sector.

One example of the importance of verticals is Salesforce. Besides investing in further horizontal offerings, Salesforce is trying to make its platform more attractive specifically for single industries like the financial sector or the automotive industry.

8. The Channel has to step on the gas

The majority of the channel has recognized that it needs to demonstrate its abilities in cloud times. First of all, the big distributors started initiatives to preserve or increase their attractiveness on the customer side (resellers such as system houses). 2015 can mark a watershed; at the very least, a practical test.

The success of distributors is directly connected with the successful cloud transformation of system houses. Many system houses cannot make this transition on their own and need help from the distributors. Different cloud scenarios will show which services are still purchased from the distributors and which services are sourced directly from the cloud providers.

The whole channel needs to rethink itself and its business model and align it with the cloud. Apart from hardware and software for building private or managed private clouds, access to public clouds via self-service is a cakewalk. For some target groups the system house, and thus the distributor, won’t have any relevance anymore. Other customers still need help on their way to the cloud. If the channel is not able to help, someone else will.

9. Price vs. Feature War

In the past, price reductions for virtual machines (VMs) and storage hit the headlines. Amazon AWS went first, and after a short time Microsoft and Google followed. Microsoft even announced that it would match each price reduction by Amazon.

It seems that the providers have reached their economic limits and that the price war is over for now. Instead, features and new services are coming to the fore to ensure differentiation. These include more powerful VMs and an expanded portfolio of value-added services. For good reason: pure infrastructure like VMs or storage is no longer a differentiator in the IaaS market. Vertical services are the future of IaaS in the cloud.

Although the IaaS market is only now picking up real pace, infrastructure is a commodity and doesn’t have much potential for innovation. We have reached a point in the cloud where it is about using cloud infrastructure to create services on top of it. Besides virtual compute and storage, enterprises and developers need value-added services like Amazon SWF or Azure Machine Learning in order to run their own offerings with speed, scale and failure-resistance, and to use them for mobile and IoT products.

10. Cloud Security

The attacks on JP Morgan, Xbox and Sony last year have shown that every company is a potential target for cyber attacks. Whether for fun (“lulz”), financial interests or political motives, the threat potential increases constantly. It shouldn’t be forgotten that mostly the big cases appear in the media; attacks on SMEs go unmentioned or, worse, the victims don’t notice them at all, or only when it is too late.

One doesn’t need to be on the Sony executive board to realize that a successful attack is a big threat, whether the damage is to reputation through stolen customer data or sensitive company information: digital data has become a precious good that needs to be protected. It is just a matter of time until one gets into the crosshairs of hackers, politically motivated extremists or intelligence agencies; companies must not let it come to that in 2015. The ongoing digitalization, however, leads to higher connectivity, which attackers exploit to plan their attacks.

Beyond standard security solutions like firewalls or e-mail security, Crisp Research expects more investments in higher-value security services like data leak prevention (DLP) in 2015. In addition, CISOs have to address strategies to avert DDoS attacks.

Hybrid and Multi Cloud: The real value of Public Cloud Infrastructure


Since the beginning of cloud computing, the hybrid cloud has been on everyone’s lips. Praised as a universal remedy by vendors, consultants and analysts alike, the combination of various cloud deployment models is a permanent focus of discussions, panels and conversations with CIOs and IT infrastructure managers. The core questions to be clarified: What are the benefits, and do credible hybrid use cases actually exist that can serve as best practice guidance? This analysis answers these questions and also describes the ideas behind multi cloud scenarios.

Hybrid Cloud: Driver behind the Public Cloud

Many developers and startups bless the public cloud for letting them escape the high and incalculable upfront costs of infrastructure resources (servers, storage, software). Examples like Pinterest and Netflix show real use cases and confirm the true benefit. Without the public cloud, Pinterest would never have experienced such growth in so short a time. Netflix, too, benefits from scalable access to public cloud infrastructure: in the fourth quarter of 2014, Netflix delivered 7.8 billion hours of video, a data traffic of 24,021,900 terabytes.

However, these prime examples hide one thing: all of them are green-field approaches, like almost every workload developed as a native web application on public cloud infrastructure, and they represent just the tip of the iceberg. The reality in the corporate world reveals a completely different truth. Inside the iceberg you find plenty of legacy applications that are not ready to be operated in the public cloud at the present stage. Furthermore, requirements and scenarios exist for which the use of the public cloud is unsuitable. In addition, most infrastructure managers and architects know their workloads and their demand very well. Providers should finally accept this and admit that the public cloud is in most cases too expensive for static workloads and that other deployment models are more attractive.

By definition, the hybrid cloud’s sphere of activity is limited to connecting a private cloud with the resources of a public cloud: a company runs its own cloud infrastructure and uses the scalability of a public cloud provider to obtain further resources like compute, storage or other services on demand. With the rise of further cloud deployment models, other hybrid cloud scenarios have developed that include hosted private and managed private clouds. In particular, for most static workloads, those where the average infrastructure requirements are known, an external statically hosted infrastructure fits very well. Periodically occurring variations, caused by marketing campaigns or the Christmas season, can be compensated by dynamically adding further resources from a public cloud.

This approach can be mapped to many other scenarios. And not only pure infrastructure resources like virtual machines, storage or databases need to be in the foreground: even the hybrid use of value-added services from public cloud providers within self-developed applications should be considered, in order to use a ready-made function instead of developing it again oneself, or to benefit from external innovations immediately. With this approach the public cloud offers companies real value without outsourcing the whole IT environment.

Real hybrid cloud use cases can be found at Microsoft, Rackspace, VMware and Pironet NDH:

  • Microsoft Azure + Lufthansa Systems
    To expand its internal private cloud and its worldwide datacenter capacities, Lufthansa relies on Microsoft Azure. One of the first hybrid cloud scenarios was a disaster recovery concept whereby Microsoft SQL Server databases are mirrored to Microsoft Azure in a Microsoft datacenter. In case of a failure within the Lufthansa environment, the databases continue to operate in a Microsoft datacenter without interruption. Furthermore, the own infrastructure resources are extended by Microsoft’s worldwide datacenters to deliver customers a consistent service offering without building own infrastructure resources globally.
  • Rackspace + CERN
    As part of its OpenLab partnership, CERN is using public cloud infrastructure from Rackspace to get compute resources on demand. This typically happens when physicists need more compute than the local OpenStack infrastructure can deliver. CERN experiences this regularly during scientific conferences, when the latest data from the LHC and its experiments is being analyzed. Applications with a small I/O rate are well suited to being outsourced to Rackspace’s public cloud infrastructure.
  • Pironet NDH + Malteser
    As part of the “Smart.IT” project, Malteser Deutschland relies on a hybrid cloud approach: applications in its own datacenter are combined with communication services like Microsoft Office 365, SharePoint, Lync and Exchange from a public cloud. Applications that are critical in terms of data protection law, like the electronic patient record, are used from a private cloud in a Pironet datacenter.
  • VMware + Colt + Sega Europe
    As long ago as early 2012, gaming manufacturer Sega Europe adopted a hybrid cloud to give external testers access to new games; previously this was realized via a VPN connection into the company’s own network. Meanwhile Sega runs its own private cloud to provide development and test systems for internal projects. This private cloud is directly connected with a VMware based infrastructure in a Colt datacenter. Thus, on the one hand Sega can obtain further resources from the public cloud to compensate peak loads; on the other hand the game testers get a dedicated testing area there. The testers no longer have to access the Sega corporate network but test on servers within the public cloud. When the tests are finished, the servers that are no longer needed are shut down by Sega IT without the intervention of Colt.

Multi Cloud: Automotive Industry as the role model

As the hybrid cloud continues to spread, multi cloud scenarios are also moving into focus. For a better understanding of the multi cloud, it helps to consider the supply chain model of the automotive industry as an example. The automaker relies on various (sometimes redundant) suppliers, which provide it with single components, assemblies or ready-made systems. In the end the automaker assembles the just-in-time delivered parts in its own assembly plant.

The multi cloud, and likewise the hybrid cloud, adopts this idea from the automotive industry by working with more than one cloud provider (cloud supplier) and, in the end, integrating everything with the own cloud application or the own cloud infrastructure.

As part of the cloud supply chain, three delivery tiers exist that can be used to develop an own cloud application or to build an own cloud infrastructure:

  • Micro Service: Micro Services are granular services like Microsoft Azure DocumentDB and Microsoft Azure Scheduler or Amazon Route 53 and Amazon SQS that can be used to develop an own cloud-native application. Micro Services can also be integrated into an existing application running on an own infrastructure, which is thus extended by the function of the Micro Service (see the sketch after this list).
  • Module: A Module encapsulates a scenario for a specific use case and thus provides a ready-to-use part of an application. To these belong e.g. Microsoft Azure Machine Learning and Microsoft Azure IoT. Modules can be used like Micro Services for development purposes or for integration into applications; compared to Micro Services, however, they provide greater functionality.
  • Complete System: A Complete System is a SaaS service, i.e. an entire application that can be used directly within the company. However, it still needs to be integrated with other existing systems.
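A hedged example of the Micro Service tier: an own application pulling in Amazon SQS as a single granular service instead of running its own message queue (boto3 is an assumption here; the queue name and region are placeholders):

```python
# Example of the "Micro Service" tier: the own application sources a
# message queue from a cloud supplier, supply-chain style. The queue
# name and region are placeholders.
import boto3

sqs = boto3.client("sqs", region_name="eu-west-1")

queue_url = sqs.create_queue(QueueName="order-events")["QueueUrl"]

# The application only produces and consumes messages; the queue
# infrastructure itself is the supplier's responsibility.
sqs.send_message(QueueUrl=queue_url, MessageBody="order#4711 created")

messages = sqs.receive_message(QueueUrl=queue_url, MaxNumberOfMessages=1)
for msg in messages.get("Messages", []):
    print(msg["Body"])
    sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=msg["ReceiptHandle"])
```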

In a multi cloud model, an enterprise cloud infrastructure or cloud application can fall back on more than one cloud supplier and thus integrate various Micro Services, Modules and Complete Systems from different providers. In this model a company develops most of the infrastructure or application on its own and extends the architecture with additional external services that would take far too much effort to redevelop in-house.

However, this leads to higher costs at the cloud management level (supplier management) as well as at the integration level. Solutions like SixSq SlipStream or Flexiant Concerto specialize in multi cloud management and support the use and management of cloud infrastructure across providers. Elastic.io, by contrast, works on several cloud layers across various providers and acts as a central connector to make cloud integration easier.

The cloud supply chain is an important part of the Digital Infrastructure Fabric (DIF) and should definitely be considered in order to benefit from the variety of different cloud infrastructures, platforms and applications. The only disadvantage is that the value-added services (Micro Services, Modules) named above are still only available in the portfolios of Amazon Web Services and Microsoft Azure. With the rapid development of use cases for the Internet of Things (IoT), IoT platforms and mobile backend infrastructures are gaining ever-growing significance. Ready-made solutions (Cloud Modules) help potential customers reduce the development effort and provide impulses for new ideas.

Infrastructure providers whose portfolios still focus on pure infrastructure resources like servers (virtual machines, bare metal), storage and some databases will disappear from the radar in the midterm. Only those who enhance their infrastructure with enablement services for web, mobile and IoT applications will remain competitive.

Video Interview: Benefits of the Hybrid Cloud


At CeBIT 2015, VMware took the chance to catch up with me about why the new German vCloud Air data center will benefit companies across Europe. I also talk about the three golden rules that should be considered when choosing hybrid cloud services: compatibility, integration and ease of use.


Top-Down vs. Bottom-Up Cloud Strategy: Two Ways – One Goal


Along with the steady growth of the cloud, the question of appropriate cloud use cases arises. After companies like Pinterest, Airbnb, Foursquare, Wooga, Netflix and many others have shown how cloud infrastructure and platforms can be used to create new or even disruptive business models, more and more CEOs and CIOs would like to benefit from the cloud's characteristics. The issue: established companies run many legacy enterprise applications that cannot be moved into the cloud in their existing form. For many decision makers this raises the question of whether they should follow a top-down or a bottom-up strategy.

Cloud Strategy: Top-Down vs. Bottom-Up

An IT strategy has to support the corporate strategy as well as possible. In line with the increasing digitalization of society and the economy, the value proposition and significance of IT rise considerably. This means that the impact of IT on the corporate strategy will become more important in the future. Assuming that cloud infrastructure, platforms and services are the technological foundation of the digital transformation, it follows that the cloud strategy has a direct impact on the IT strategy.

This raises the question of how far cloud services are able to support the corporate strategy, whether directly or indirectly. This does not have to be reflected in numbers. If a company, for instance, enables its employees to work more flexibly based on a software-as-a-service (SaaS) solution, it has done something for productivity, which has a positive effect on the company. However, it is important to understand that cloud infrastructure and platforms just serve as a foundation on which companies gain the capabilities to create innovation. The cloud is just a vehicle.

Two approaches can be used to get a better understanding of the impact of cloud computing on the corporate strategy:

  • Top-Down Cloud Strategy
    In the top-down approach, the possibilities of cloud computing are analyzed and a concrete use case is defined. An innovation or idea is created that is enabled by cloud computing. The cloud strategy is built on this basis.
  • Bottom-Up Cloud Strategy
    In the bottom-up approach, an existing use case is implemented using the possibilities of cloud computing. This means it is analyzed how the cloud can help to support the needs of the use case. The respective cloud strategy is then derived from this.

The top-down approach mainly produces new business models or disruptive ideas. Development happens on a green field in the cloud and is mostly the domain of innovators. The bottom-up approach aims to move an existing system or application into the cloud or to redevelop it there. In this case it is mostly about keeping an existing IT resource alive or, at best, optimizing it.

Bottom-Up: Migration of Enterprise Applications

Established companies prefer to follow a bottom-up strategy in order to quickly benefit from cloud capabilities. However, the devil is in the details. Legacy or classical enterprise applications were not developed to run on a distributed infrastructure – that is, a cloud infrastructure. Scaling out is not natural for them; at best they scale up, e.g. by using several Java threads on one single system. If this system fails, the application is no longer available. Every application that is supposed to run in the cloud thus has to follow the characteristics of the cloud and needs to be developed for this purpose. The challenge: companies still lack staff with the right cloud skills. In addition, companies are intensively discussing "data gravity" – the inertia of data, i.e. the difficulty of moving it, either because of the size of the data volume or because of a legal requirement to store the data in the company's own environment.

Vendors have recognized the lack of knowledge as well as data gravity and try to support the bottom-up strategy with new solutions. With "NetApp Private Storage", NetApp allows companies to balance data gravity between public cloud services and their own level of control. Companies have to fulfill various governance and compliance policies and thus have to keep their data under control. One solution is to let cloud services access the data in a hybrid cloud model without moving it. In this scenario the data is not stored directly in the provider's cloud. Instead, the cloud services access the data via a direct connection when processing it. NetApp enables this scenario in cooperation with Amazon Web Services. For example, Amazon EC2 instances can be used to process data that is stored in an Equinix colocation data center – the connection is established via AWS Direct Connect.

Another challenge with non-cloud-ready enterprise applications in the public cloud lies at the data level – when the data has to leave the provider's cloud. The reason is that cloud native storage types (object storage, block storage) are not compatible with common on-premise storage communication protocols (iSCSI, NFS, CIFS). In cooperation with Amazon Web Services, NetApp Cloud ONTAP tries to remedy this. Acting as a kind of NAS storage, the data is stored on Amazon Elastic Block Store (EBS) SSDs. Cloud ONTAP serves as a storage controller and gives non-cloud-ready enterprise applications access to the data. Due to the compatibility with common communication protocols, the data can be moved more easily.

VMware vCloud Air targets companies with existing enterprise applications. The vCloud Air public cloud platform is based on vSphere technology and is compatible with on-premise vSphere environments. Existing workloads and virtual machines can thus be moved back and forth between VMware's public cloud and a virtualized on-premise infrastructure.

ProfitBricks tries to support companies with its Live Vertical Scaling concept. A single server can be vertically extended with further resources – such as additional CPU cores or RAM – without rebooting it. Thus the performance of a running virtual server can be increased without making changes to the application. The best foundation for this is a LAMP stack (Linux, Apache, MySQL, PHP), since e.g. a MySQL database recognizes new resources without adjustments or a reboot of the host system and can use the added performance immediately. To make this possible, ProfitBricks made modifications at operating system and hypervisor (KVM) level that are transparent for the user. Customers only have to use the provided reference operating system image that includes the Live Vertical Scaling functionality.
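
Whether a running system actually sees such hot-added resources can be verified with a small probe like the following vendor-neutral Python sketch (no ProfitBricks specifics are assumed): it polls the number of CPUs the operating system reports, which on a host supporting live vertical scaling should increase without a reboot.

```python
# Hypothetical, vendor-neutral probe: polls how many CPUs the OS currently
# exposes to this process. On a host with live vertical scaling, the count
# should rise while the machine keeps running.
import os
import time

last = os.cpu_count()
print(f"CPUs visible: {last}")
while True:
    time.sleep(10)
    current = os.cpu_count()
    if current != last:
        print(f"CPU count changed: {last} -> {current} (no reboot required)")
        last = current
```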

A typical bottom-up use case of enterprise applications in the public cloud can be found at Amazon Web Services. This example also shows that the importance of system integrators in the cloud is rising.

  • Amazon Web Services & Kempinski Hotels
The hotel chain Kempinski Hotels has migrated the majority of its core applications and departments – among others finance, accounting and training – to the Amazon cloud infrastructure. Together with system integrator Cloudreach, a VPN connection was established between the company's own data center in Geneva and the Amazon cloud, over which the 81 hotels worldwide are now served. Furthermore, Kempinski plans to completely shut down its own data center and move 100 percent of its IT infrastructure to the public cloud.

Top-Down: Greenfield Approach

In contrast to merely preserving enterprise applications, the greenfield approach follows the top-down strategy. An application or a business model is developed from scratch, and the system is matched to the requirements and characteristics of the cloud. We are then talking about a cloud native application that considers scalability and high availability from the beginning. The application is able to independently start additional virtual machines if more performance is necessary (scalability) and to shut them down when they are no longer needed. The same applies when a virtual machine fails: the application independently ensures that another virtual machine is started as a substitute (high availability). The application is thus able to run on any virtual machine of a cloud infrastructure – not least because any machine can fail at any time and a substitute needs to be started. In addition, the data processed by an application is no longer stored at a single location but is distributed over the cloud.
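
The following is a minimal sketch of such self-healing logic in Python with boto3, assuming AWS credentials and a prepared machine image; the AMI ID and the "role" tag are placeholders. A production design would typically delegate this to provider features like Auto Scaling groups, but the sketch shows the principle: the application itself notices a missing instance and starts a substitute.

```python
# Minimal self-healing sketch (assumes AWS credentials; AMI ID and tag are
# placeholders). Real deployments would normally use Auto Scaling groups; this
# only illustrates the cloud native principle of replacing failed machines.
import boto3

ec2 = boto3.client("ec2", region_name="eu-west-1")
DESIRED = 3
AMI_ID = "ami-00000000"  # hypothetical image with the application baked in

def running_workers():
    reservations = ec2.describe_instances(
        Filters=[
            {"Name": "tag:role", "Values": ["worker"]},
            {"Name": "instance-state-name", "Values": ["running", "pending"]},
        ]
    )["Reservations"]
    return [i for r in reservations for i in r["Instances"]]

missing = DESIRED - len(running_workers())
if missing > 0:  # a machine failed or demand grew: start substitutes
    ec2.run_instances(
        ImageId=AMI_ID, InstanceType="t2.micro",
        MinCount=missing, MaxCount=missing,
        TagSpecifications=[{"ResourceType": "instance",
                            "Tags": [{"Key": "role", "Value": "worker"}]}],
    )
```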

Startups and innovative companies are aware of this complexity, as the following use cases display.

  • Amazon Web Services & Netflix
Netflix is one of the lighthouse projects on the Amazon cloud. The streaming provider uses every characteristic of the cloud, which is reflected in the high availability as well as the performance of the platform. As one of the pioneers on the Amazon infrastructure, Netflix has developed its own tools – the Netflix Simian Army – from the very beginning to master this complexity.

However, the recent past has shown that innovative business models do not necessarily have to be implemented on a public cloud and that the greenfield approach does not only belong to startups.

  • T-Systems & Runtastic
Runtastic offers apps for endurance, strength & toning, health & wellness as well as fitness and helps users to reach their health and fitness goals. The company is growing massively. After 100,000 downloads in 2010 and 50 million downloads in 2013, the number of downloads has reached over 110 million today. Furthermore, Runtastic counts 50 million users worldwide. Basically, these numbers speak for an ideal public cloud scenario. However, for technical reasons Runtastic decided for T-Systems and runs its infrastructure in two data centers in a colocation IaaS hybrid model.
  • Claranet & Leica
Last year camera manufacturer Leica launched "Leica Fotopark", an online photo service to manage, edit, print and share photos. Managed cloud provider Claranet is responsible for the development and operations of the infrastructure. "Leica Fotopark" runs on a scale-out environment based on a converged infrastructure and software defined storage. The agile operations model is based on the DevOps concept.

Greenfield vs. Enterprise Applications: The Bottom-Line

Whether a company decides for the top-down or the bottom-up cloud strategy depends on its individual situation and its current state of knowledge. The fact is that both variants help the IT infrastructure, the IT organization and the whole company become more agile and scalable. However, only a top-down approach leads to innovation and new business models. Nevertheless, one has to consider that e.g. the development and operations of the Netflix platform require an excellent understanding of cloud architectures, which is still few and far between in the current market.

Regardless of their strategy, and especially with regard to the Internet of Things (IoT) and the essential Digital Infrastructure Fabric (DIF), companies should focus on a cloud infrastructure. It offers ideal preconditions for the backend operations of IoT solutions as well as for the exchange with sensors, embedded systems and mobile applications. In addition, a few providers offer ready-made microservices to simplify development and accelerate time to market. Furthermore, a worldwide spanning network of data centers offers global scalability and helps to expand to new countries quickly.

Cloud Marketplace: A means to execute the Bottom-Up Cloud Strategy


Worldwide, many CIOs are still looking for the right answer to let their companies benefit from cloud computing capabilities. Basically, all kinds of organizations are candidates for a bottom-up cloud strategy – the migration of existing applications and workloads into the cloud. This approach isn't very innovative, but it offers a relatively high value proposition with low risk. An analysis of market-ready cloud marketplaces shows promising capabilities for implementing the bottom-up cloud strategy in the near term.

The Cloud drives the evolution of IT purchasing

In the blog post "Top-Down vs. Bottom-Up Cloud Strategy: Two Ways – One Goal", two strategy approaches are discussed that can be used to benefit from public cloud infrastructure. One conclusion was that top-down strategies remain reserved for innovators. Bottom-up strategies are mainly pursued in the context of existing workloads in order to move them into the cloud. CIOs of prestigious companies worldwide are still searching for best practices to lift their legacy enterprise workloads into the cloud.

A look at the general purchasing behavior for IT resources unveils a disruptive change. Besides CIOs, IT infrastructure managers and IT buyers, department managers are also demanding a say or are going their own ways. The driver behind this development: the public cloud. Its self-service model erodes the significance of classical IT purchasing. Obtaining hardware and software licenses from distributors and resellers will become less important in the future. Self-service makes it convenient and easy to get access to infrastructure resources as well as software. Thus, distributors and resellers have to systematically rethink their business models. Those system houses and system integrators that still haven't started their cloud transformation are at risk of disappearing from the market in the next three to five years. So long!

Besides self-service, the public cloud primarily offers one thing: choice! More than ever before. On the one hand there is the ever-growing variety of sources of supply – the providers' solutions. On the other hand there are the different deployment models the public cloud can be combined with. Hybrid and multi cloud scenarios are the reality.

The next evolutionary step is well underway – cloud marketplaces. Implemented the right way by their operators, they offer IT buyers an ideal central platform for purchasing IT resources. At the same time they support CIOs in pushing their cloud transformation with a bottom-up strategy.

Bottom-Up Cloud Strategy: Cloud Provider Marketplaces support the implementation

The bottom-up cloud strategy helps companies move existing legacy or enterprise applications into the cloud to benefit from the cloud's capabilities without thinking about innovation or changing the business model. It is mainly about efficiency, costs and flexibility.

In this strategy approach the infrastructure is not the central point but rather a means to an end. After all, the software needs to be operated somewhere. In most cases the purpose is to continue using the existing software in the cloud. At the application level, cloud marketplaces can help to succeed in the short term. More importantly, they address current challenges and requirements of companies. These are:

  • The distributed purchase of software across the entire organization is difficult.
  • The demand for accessing software in the short term – e.g. for testing purposes – increases.
  • Individual employees and departments are asking for a catalog of categorized and approved software solutions.
  • A centrally organized cloud marketplace helps to counter shadow IT.

The fact that a vast number of valid software licenses are still used in on-premise infrastructure underlines the importance of Bring Your Own License (BYOL). BYOL is a concept by which a company legally continues using its existing software licenses on the cloud provider's infrastructure.

With respect to supporting the bottom-up cloud strategy through a cloud marketplace, experience has shown that marketplaces owned and operated by cloud providers, like the Amazon AWS Marketplace or the Microsoft Azure Marketplace, play an outstanding role. Both offer the necessary technology and excellence to make it easy for customers and partners to decide to run applications in the cloud.

Technical advantage, simplicity and most of all an extensive choice are the key success factors of public cloud providers' marketplaces. The AWS Marketplace already offers 2,100+ solutions, the Azure Marketplace even 3,000+ solutions. A few mouse clicks and the applications are deployed on the cloud infrastructure – incl. BYOL. Thus the infrastructure becomes easily usable for all users.
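
Under the hood, such a marketplace deployment is essentially an image launch on the provider's infrastructure. As a hedged illustration in Python with boto3 (the product name filter is a placeholder, and BYOL terms depend on the specific listing):

```python
# Hypothetical sketch: discovering the images of a subscribed AWS Marketplace
# product and launching the newest one. The product name filter is a
# placeholder; the subscription itself happens in the marketplace beforehand.
import boto3

ec2 = boto3.client("ec2", region_name="eu-west-1")

images = ec2.describe_images(
    Owners=["aws-marketplace"],
    Filters=[{"Name": "name", "Values": ["example-product*"]}],  # placeholder
)["Images"]

newest = max(images, key=lambda img: img["CreationDate"])  # assumes a match
ec2.run_instances(ImageId=newest["ImageId"], InstanceType="m4.large",
                  MinCount=1, MaxCount=1)
```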

The providers are investing a lot in the development of their marketplaces – with good reason. Amazon AWS' and Microsoft Azure's own cloud marketplaces have strategic importance. They are ideal tools to steer new customers to the infrastructure and to increase revenue with existing customers.

Cloud marketplaces operated by public cloud providers are becoming ever more popular – with verifiable numbers. The marketplaces have reached substantial market maturity and offer a large and wide variety of solutions. Against this backdrop, CIOs who are planning to migrate their existing applications into the cloud should engage intensively with cloud marketplaces. These kinds of marketplaces are more than just an add-on – they can help to accelerate the cloud migration.

IoT-Backend: The Evolution of Public Cloud Providers in the Internet of Things (IoT)


The Internet of Things (IoT) has jumbled the agenda of CIOs and CTOs faster than expected and with breathtaking velocity. Until recently, cloud, big data and social topics occupied center stage. In the meantime, however, we are talking more and more about the interconnection of physical objects like human beings, sensors, household items, cars, industrial facilities etc. Whoever thinks that the "Big 4" will now disappear from the radar is wrong. Quite the contrary is the case. Cloud infrastructure and platforms belong to the central drivers behind IoT services since they offer the perfect preconditions to serve as vital enablers and backend services.

Public Cloud Workloads: 2015 vs. 2020

The demand for public cloud services shows increasing momentum. On the one hand this is due to the requirement of CIOs to run their applications in a more agile and flexible way. On the other hand, most public cloud providers are addressing the needs of their potential customers. Among the varying workload categories running on public IaaS platforms, standard web applications (42 percent) still represent the major part, followed at a distance by mobile applications (22 percent), media streaming (17 percent) and analytics services (12 percent). Enterprise applications (4 percent) and IoT services are still playing a minor part.

The reason for the current segmentation: websites, backend services as well as content streaming (music, videos, etc.) are perfect for the public cloud. Enterprises, on the other hand, are still in the middle of their digital transformation and are evaluating providers as well as technologies for a successful change. IoT projects are still at the beginning or in the idea generation phase. Thus in 2015, IoT workloads make up only a small proportion of public cloud environments.

Until 2020 this ratio will change significantly. Along with the increasing cloud knowledge within enterprise IT and the ever-expanding market maturity of public cloud environments for enterprise applications, the proportion of this category will increase worldwide from 4 percent to 12 percent. Accordingly, the proportion of web and mobile applications as well as content streaming will decrease. Instead, IoT workloads will represent almost a quarter (23 percent) of workloads on public IaaS platforms like AWS, Azure and Co. worldwide.

Public Cloud Provider: The perfect IoT-Backend

The Internet of Things will quickly become a key factor for the future competitiveness of enterprises. Thus, CIOs have to deal with the technologies necessary to support their enterprise business technology strategy. Public cloud environments – infrastructure (IaaS) as well as platforms (PaaS) – offer perfect preconditions to serve as supporting backend environments for IoT services and devices. The leading public cloud providers have already prepared their environments with the key features to develop into an IoT backend. The central elements of a holistic IoT backend are characterized as follows (excerpt):

  • Global scalability
  • Connectivity/ Connectivity management
  • Service portfolio and APIs
  • Special services for specific industries
  • Platform scalability
  • Openness
  • Data analytics
  • Security & Identity management
  • Policy control
  • Device management
  • Asset and Event management
  • Central hub

Public cloud based infrastructure-as-a-service (IaaS) will mainly be used to provide compute and storage capacities for IoT deployments. IaaS provides enterprises and developers with inexpensive and almost infinite resources to run IoT workloads and store the generated data. Platform-as-a-service (PaaS) offerings will benefit from the IoT market as they provide enterprises faster access to software development tools, frameworks and APIs. PaaS platforms can be used to develop control systems to manage IoT applications, IoT backend services and IoT frontends, as well as to integrate with third party solutions to build a complete "IoT value chain". Even the software-as-a-service (SaaS) market will benefit from the IoT market growth. User-friendly SaaS solutions will enable users, executives, managers as well as end customers and partners to analyze and share the data generated by interconnected devices, sensors etc.

Use Cases in the Internet of Things

digitalSTROM + Microsoft Azure
digitalSTROM is one of the pioneers in the IoT market. As a provider of smart home technologies, the Swiss vendor has developed an intelligent solution for connected homes that communicates with several devices over the power supply line via smartphone apps. Lego-like bricks form the foundation. Each connected device can be addressed via a single brick, which holds the intelligence of the device. digitalSTROM evaluated the potential of a public cloud environment for its IoT offering early on. Microsoft Azure provides the technological foundation.

General Electric (GE) + Amazon Web Services
General Electric (GE) has created its own IoT factory (platform) within the AWS GovCloud (US) region to interconnect humans, simulators, products, sensors etc. The goal is to improve collaboration, prototyping and product development. GE decided for the AWS GovCloud to fulfill legal and compliance regulations. One customer who already profits from the IoT factory is E.ON. When the demand for energy increased in the past, GE typically tried to sell E.ON more turbines. In the course of the digital transformation, GE started early to change its business model. GE uses operational data of turbines to optimize energy efficiency by performing comprehensive analyses and simulations. E.ON gets real-time access to the interconnected turbines to control the energy management on demand.

ThyssenKrupp + Microsoft Azure
Together with CGI, ThyssenKrupp has developed a solution to interconnect thousands of sensors and systems within its elevators over the Microsoft Azure cloud, using Azure IoT services. The solution provides ThyssenKrupp with a range of information from the elevators to monitor the engine temperature, the shaft calibration, the cabin velocity, the door functionality and more. ThyssenKrupp records the data, transfers it to the cloud and combines it in a single dashboard based on two data types: alarm signals that indicate urgent problems, and events that are only stored for administrative reasons. Engineers get real-time access to the elevator data to make their diagnostics immediately. A minimal sketch of this alarm/event split follows below.
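
The following plain Python sketch illustrates that split; the field names and thresholds are invented for illustration and are not ThyssenKrupp's actual schema.

```python
# Toy sketch of splitting elevator telemetry into the two data types described
# above: urgent alarms vs. events kept for administrative records. Field names
# and thresholds are invented for illustration.
readings = [
    {"elevator": "A7", "metric": "engine_temp_c", "value": 104.2},
    {"elevator": "B2", "metric": "cabin_speed_ms", "value": 1.4},
]

ALARM_LIMITS = {"engine_temp_c": 95.0, "cabin_speed_ms": 3.0}

alarms, events = [], []
for r in readings:
    limit = ALARM_LIMITS.get(r["metric"])
    (alarms if limit is not None and r["value"] > limit else events).append(r)

print("urgent:", alarms)          # surfaced on the dashboard immediately
print("administrative:", events)  # stored for later analysis
```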

IoT-Backend: Service Portfolio and Development Capacities are central

All the use cases above show three key developments that will determine the next five years and significantly influence the IaaS market:

  1. IoT applications are a central driver behind IaaS adoption.
  2. Development tools, APIs, and value added services are central decision criteria for a public cloud environment.
  3. Developer and programming skills are crucial.

Thus, several public cloud providers should ask themselves whether they have the potential and the preconditions to develop their offering further into an IoT backend. Only those who provide services and have development capabilities (tools, SDKs, frameworks) in their portfolio will be able to play a central role in the profitable IoT market and be considered as the infrastructure base for novel enterprise and mobile workloads. Note: more and more public cloud infrastructure is used as an enabler and backend infrastructure for IoT offerings.

Various enablement services are available in the public cloud market that can be used to develop an IoT backend infrastructure.

Amazon AWS services for the Internet of Things:

  • AWS Mobile Services
  • Amazon Cognito
  • Simple Notification Service
  • Mobile Analytics
  • Mobile Push
  • Mobile SDKs
  • Amazon Kinesis

Microsoft Azure IoT-Services:

  • Azure Event Hubs
  • Azure DocumentDB
  • Azure Stream Analytics
  • Azure Notification Hubs
  • Azure Machine Learning
  • Azure HDInsight
  • Microsoft Power BI

Amazon AWS hasn't done any noteworthy marketing for the Internet of Things so far. Only a sub-website explains the idea of IoT and which existing AWS cloud services should be considered. Even with Amazon Kinesis – predestined for IoT applications – AWS is taking it easy. However, a look under the hood of IoT solutions reveals that many cloud based IoT solutions are delivered via the Amazon cloud.
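
To illustrate why Kinesis is predestined for this role, here is a hedged Python sketch of a sensor pushing readings into a stream; it assumes boto3 with configured credentials and a previously created stream (the name "sensor-stream" is a placeholder).

```python
# Minimal sketch: an IoT device feeding measurements into Amazon Kinesis.
# Assumes boto3 plus credentials; the stream "sensor-stream" is a placeholder
# and must exist beforehand.
import json
import boto3

kinesis = boto3.client("kinesis", region_name="eu-west-1")

reading = {"device_id": "sensor-42", "temperature": 21.7}
kinesis.put_record(
    StreamName="sensor-stream",
    Data=json.dumps(reading).encode("utf-8"),
    PartitionKey=reading["device_id"],  # keeps one device's data ordered
)
```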

Microsoft considers the Internet of Things a strategic growth market and has created Microsoft Azure IoT Services, a specific area within the Azure portfolio. So far, however, this is only a best-of of existing Azure cloud services that encapsulate specific functionality for the Internet of Things.

Public Cloud Providers continuously need to expand their Portfolio

From a strategy perspective, IoT use cases follow the top-down cloud strategy approach: the potential of the cloud is considered and a new use case is created on that basis. This will significantly shift the ratio from bottom-up to more top-down use cases in the next years (today's ratio is about 10 percent top-down to 90 percent bottom-up). More and more enterprises will start to identify and evaluate IoT use cases to enhance their products with sensors and machine-2-machine communication. The market behavior we see for fitness wearables (wristbands and devices people use to quantify themselves) today will exponentially escalate to other industries.

So, the majority of cloud providers are under pressure and can't rest on their existing portfolios. Instead, they need to increase their attractiveness by serving their existing customer base as well as potential new customers with IoT enablement services in the form of microservices and cloud modules. Because the growth of the cloud and the progress of the Internet of Things are closely bound together.

API Economy as a competitive factor: iPaaS in the Age of the Internet of Things (IoT) and Multi-Cloud Environments


What do APIs, integration and complexity have in common? All three are inseparable during the growth of an IT project. Integration projects between two or more IT systems often lead to a delay or even the failure of the whole project. Depending on the company size, on-premise environments mostly consist of a relatively manageable number of applications. However, the use of multiple cloud services and the rise of the Internet of Things (IoT) scale this into an excess of integration complexity.

The ever-growing use of cloud services and infrastructure across several providers (multi-cloud) makes a central approach necessary to maintain an overview. In addition, it is essential to ensure seamless integration among all cloud resources and the on-premise environment to avoid system and data silos. The variety of cloud services is rising incessantly.

The cloud supports the Internet of Things and its industrial offshoot – the Industrial Internet. Cloud infrastructure and platforms provide the perfect foundation for IoT services and IoT platforms and will lead to a phenomenal rise of IoT business models. This will result in a market of ever new devices, sensors and IoT solutions whose variety and potential cannot be foreseen. At the same time the demand for integration increases. After all, only the connection of various IoT services and devices creates actual value. Analytics services, in turn, need access to the collected data from different sources for analysis and correlation purposes.

This access typically happens via the APIs of cloud and IoT services. As a consequence, the term API economy comes into the spotlight. Integration Platform-as-a-Service (iPaaS) offerings have emerged as good candidates to ensure access, integration, control and management in the cloud and in the Internet of Things.

iPaaS and API Economy: It’s all about the API

Enterprise Application Integration (EAI) was the central anchor in the age of client-server communication to ensure business process integration along the whole value chain. The focus is on the tight interaction of a variety of applications distributed over several independently operated platforms. The goal: the uniform and integrated mapping of all business processes in IT applications, thus avoiding data silos.

However, the transition into the cloud age changes the usage behavior from on-premise interfaces to the consumption of web APIs (Application Programming Interfaces). Almost every cloud and web service provider offers a REST or SOAP based API that makes it possible to integrate services into an application and thus benefit directly from external functions. Along with the increasing consumption of cloud services and the ever-growing momentum of the Internet of Things, the importance of APIs will rise significantly.
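
Consuming such a web API typically comes down to a single authenticated HTTPS call, as in this generic Python sketch using the requests library; the endpoint and token are placeholders.

```python
# Generic sketch of consuming a REST API; endpoint and token are placeholders.
# One HTTPS call returns JSON that the application can embed directly.
import requests

response = requests.get(
    "https://api.example.com/v1/customers",           # hypothetical endpoint
    headers={"Authorization": "Bearer <API-TOKEN>"},  # placeholder credential
    timeout=10,
)
response.raise_for_status()
for customer in response.json():
    print(customer)
```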

API Economy

The cloud native Internet companies reflect this trend. APIs are a central competitive factor for players like Salesforce, Twitter, Google, Amazon and Amazon Web Services and represent the lifeline of their success. All of these providers have created their own API ecosystems, which their customers and partners use to develop their own offerings.

In this context the term "API economy" is used. The API economy describes the increasing economic potential of APIs. Thanks to mobile, social media and cloud services, APIs are no longer popular only among developers but also find their way onto the memos of CEOs and CIOs who have identified the financial impact. Providers typically benefit from APIs by:

  • Selling (premium) functions within an otherwise free service.
  • Charging for the sharing of content through a partner's application or service.

CIOs benefit from the API economy by getting access to a quasi endless choice of applications and services they can use to expand their websites, applications and systems without developing, operating or maintaining these functionalities on their own. Furthermore, APIs enable partners, customers and communities to get easy access to a company's own applications, data and systems, letting the CIO's company become part of the API economy.

Everything works via supposedly "simple" API calls. However, the devil is in the details. Integration and API management have great significance in the API economy.

iPaaS = Integration Platform-as-a-Service

Over the last years many vendors have emerged that specialize in API management and the integration of different services. These so-called Integration Platform-as-a-Service (iPaaS) offerings are cloud based integration solutions (known in pre-cloud times as "middleware") that support the interaction between several cloud services. Developers and enterprises thus get the opportunity to create their own "integration flows" that connect multiple cloud services with each other as well as with on-premise applications.

The iPaaS market splits into two camps: the wild startups and the IT majors who have expanded or rebuilt their portfolios. iPaaS vendors to watch include (excerpt):

  • 3scale
    The 3scale platform consists of two areas. API Program Management gives an overview of and information about the APIs in use. API Performance Management analyzes the API traffic in the cloud as well as in on-premise infrastructure. Together they make it possible to control and manage the API traffic within a company's own system and application architecture.
  • elastic.io
    The elastic.io iPaaS is offered as a cloud service as well as an on-premise installation in the own infrastructure. Based on Node.js, Java and JSON, elastic.io provides a development framework that can be used to integrate several CRM, financial, ERP and ecommerce cloud services to ensure data integrity. The necessary connectors are provided, e.g. for SAP, SugarCRM, Zendesk, Microsoft Dynamics, Hybris and Salesforce.
  • SnapLogic
    The SnapLogic iPaaS is provided as a SaaS solution and helps to integrate data from cloud services as well as to let SaaS applications interact with each other and with on-premise applications. SnapLogic provides ready-made connectors (Snaps and Snaplexes) that can be used for integration and data processing. The iPaaS provider primarily focuses on the Internet of Things, connecting data, applications and devices.
  • Software AG
    The central parts of Software AG's iPaaS portfolio are webMethods Integration and webMethods API Management. The webMethods Integration Backbone integrates several cloud, mobile, social and big data services as well as partner solutions via a B2B gateway. webMethods API Management covers all tasks needed to gain an overview of and control over the company's own and externally used APIs. The functional range includes design, development, cataloging and version management.
  • Informatica
    The Informatica cloud integration portfolio contains a large service offering specifically for enterprise customers. This includes the Informatica Cloud iPaaS, which is responsible for the bidirectional synchronization of objects between cloud and on-premise applications as well as for the replication of cloud data and business process automation. The Integration Services support the consolidation of different cloud and on-premise applications to integrate, process and analyze operational data in real time.
  • Unify Circuit
    Unify Circuit is a SaaS based collaboration suite that combines voice, video, messaging and screen and file sharing – everything organized in "conversations". With it, Unify introduced a new PaaS category – cPaaS (Collaborative Platform-as-a-Service). This is an iPaaS that consolidates PBX, SIP as well as external cloud services like Box.com, Salesforce or Open-Xchange into a uniform collaboration platform. All data remains stored with the external partners and is consolidated on the Unify Circuit platform at runtime.

IoT and Multi-Cloud: The future belongs to open platforms

Openness is a highly discussed topic in IT and especially on the Internet. The past – or rather Google – has taught us: the future belongs only to open platforms. This is not about openness in terms of open standards – even, or especially, Google runs diverse proprietary implementations, e.g. Google App Engine.

However, Google understood from the very beginning how to position itself as an open platform. Important: openness in the sense of providing access to its own services via APIs. Jeff Jarvis illustrates in his book "What Would Google Do?" how Google – based on its platform – enables other companies to build their own business models and mashups. Not without cause, of course. This kind of openness and the right use of the API economy quickly led to dissemination and made Google a cash cow – via advertising.

Companies like Unify are still far away from being comparable with the Google platform. However, the decision makers at Unify apparently realized that only an open architecture approach can turn the company from a provider of integrated communication solutions into a cloud integration provider and thus a part of the API economy. For this purpose, Unify Circuit not only consolidates external cloud services on its collaboration platform but also enables developers to integrate Circuit's core functions like voice or video as mashups into their own web applications.

From a CIO perspective, integration is crucial to avoid system and data silos. A non-holistic integration of multiple independent systems can harm the overall process. It is therefore vital that cloud, IoT and Industrial Internet services are seamlessly integrated with each other and with existing systems to fully support all business processes.

Round 11: OpenStack Kilo


The OpenStack community hits round 11. Last week the newest OpenStack release "Kilo" was announced – with remarkable numbers. Almost 1,500 developers and 169 organizations contributed source code, patches etc. Top supporting companies of OpenStack Kilo include Red Hat, HP, IBM, Mirantis, Rackspace, Yahoo!, NEC, Huawei and SUSE. OpenStack Kilo is characterized by better interoperability for external drivers and support for new technologies like containers as well as bare-metal concepts.

OpenStack Kilo: New Functions

According to the OpenStack Foundation, almost half of all OpenStack deployments (46 percent) are production environments. Network function virtualization (NFV), used to run single virtual network components, is the fastest-growing use case for OpenStack. One of the lighthouse projects is eBay, which operates OpenStack at large scale.

Essential new functions of OpenStack Kilo

  • OpenStack Kilo is the first release that fully supports the bare-metal service "Ironic" to run workloads directly on physical machines.
  • The OpenStack object storage service "Swift" now supports "Erasure Coding (EC)" to fragment data and store the fragments at distributed locations (a toy illustration of the principle follows this list).
  • The "Keystone" identity service was enhanced with identity federation to support hybrid and multi-cloud scenarios.
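
The principle behind erasure coding can be illustrated with a toy Python sketch: an object is split into k data fragments plus one XOR parity fragment, so any single lost fragment can be rebuilt from the survivors. This is a deliberately simplified stand-in; Swift's actual implementation uses Reed-Solomon codes via liberasurecode and supports more parity fragments.

```python
# Toy illustration of erasure coding (NOT Swift's implementation, which uses
# Reed-Solomon via liberasurecode): k data fragments plus one XOR parity
# fragment allow any single lost fragment to be reconstructed.
from functools import reduce

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def encode(data, k=4):
    """Split data into k padded fragments and append one parity fragment."""
    size = -(-len(data) // k)  # fragment size, rounded up
    frags = [data[i * size:(i + 1) * size].ljust(size, b"\0") for i in range(k)]
    frags.append(reduce(xor, frags))  # parity = XOR of all data fragments
    return frags

def rebuild(frags):
    """Reconstruct a single missing fragment (marked None) from the rest."""
    missing = frags.index(None)
    frags[missing] = reduce(xor, (f for f in frags if f is not None))
    return frags

fragments = encode(b"object stored by Swift", k=4)
fragments[2] = None          # simulate a failed disk or storage node
print(rebuild(fragments))    # the lost fragment is back
```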

New features of the OpenStack Core Projects (excerpts)

  • OpenStack Nova Compute
    Improvements for live updates when a database schema is changed, and support for changing the resources of a running virtual machine.
  • OpenStack Swift Object Storage
    Support for "Erasure Coding", temporary access to objects via a URL, and improvements for global cluster replication.
  • OpenStack Cinder Block Storage
    Enhancements to attach a volume to multiple virtual machines in order to implement high-availability and migration scenarios.
  • OpenStack Neutron Networking
    Extensions around network function virtualization (NFV) like port security for Open vSwitch and VLAN transparency.
  • OpenStack Ironic Bare-Metal
    Ironic supports existing virtual machine workloads as well as new technologies like containers (Docker), PaaS and NFV.
  • OpenStack Keystone Identity Service
    The extensions around identity federation help to distribute workloads across public and private clouds to build OpenStack based hybrid and multi-cloud environments (a minimal authentication sketch follows this list).
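
As a minimal sketch of what authentication against Keystone looks like from an application's point of view, the following Python snippet uses the keystoneauth1 client library; the endpoint, credentials and domain names are placeholders. Federated logins (e.g. via SAML or OpenID Connect) build on the same session abstraction, so code consuming the token stays unchanged across clouds.

```python
# Minimal Keystone authentication sketch with keystoneauth1; the endpoint and
# credentials are placeholders. Federated identity plugs into the same Session
# mechanism, which is what enables single sign-on across OpenStack clouds.
from keystoneauth1.identity import v3
from keystoneauth1.session import Session

auth = v3.Password(
    auth_url="https://keystone.example.com:5000/v3",  # hypothetical endpoint
    username="demo",
    password="secret",
    project_name="demo-project",
    user_domain_name="Default",
    project_domain_name="Default",
)
sess = Session(auth=auth)
print(sess.get_token())  # scoped token usable against other OpenStack services
```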

OpenStack Kilo: Short Analysis and Impact

OpenStack is still growing, even if the high ratio of NFV use cases shows that OpenStack is mainly used in service provider networks to operate single network components more flexibly and cost-effectively. However, the new Kilo functions for "federated identity", "erasure coding" and "bare metal" will move OpenStack up to the top of the CIO agenda.

The support of "erasure coding" is a long overdue function for Swift Object Storage – initial discussions already started for the "Havana" release in 2013. All big public cloud providers have been working with this distribution strategy for years to ensure high availability of data. The introduction of bare metal comes at the right time. Workload migrations to cloud based infrastructure show with increasing frequency that virtual machines are not suitable for all use cases. Database servers and performance intensive workloads ideally run on physical machines, whereas distributed workloads like application and web servers are good candidates for virtual machines. Finally, identity federation will help CIOs build seamless OpenStack based hybrid and multi-cloud environments. Users only need a single login to authenticate across multiple providers and get access to servers, data and applications in private and public clouds at once.

This begs the question how easily and quickly CIOs can benefit from these new functions. The last five years have shown that using OpenStack implies high complexity. This is mainly because OpenStack is organized as a big project composed of several sub-projects. Only the close interaction of all sub-projects necessary to support a specific use case is promising. The majority of CIOs working with OpenStack consider a professional distribution instead of building their own OpenStack version based on the source code of the community trunk. In Germany these are 75 percent of the OpenStack users.

Microservice: Cloud and IoT applications force the CIO to create novel IT architectures


The digital transformation challenges CIOs to remodel their existing IT architectures, providing their internal customers with a dynamic platform that stands for better agility and fosters the company's capacity for innovation. This change calls for a complete rethink of historically grown architecture concepts. Even if most current attempts are to migrate existing enterprise applications into the cloud, CIOs have to empower their IT teams to consider novel development architectures. Because modern applications and IoT services are innovative and cloud based.

Microservice: Background and Meaning

Typical application architectures, metaphorically speaking, remind one of a "monolith", a big massive stone made of one piece. The characteristics are the same: heavy, inflexible and difficult or impossible to modify.

Over the last decades many, mostly monolithic, applications have been developed. This means that an application includes all modules, libraries and dependencies necessary to ensure smooth functionality. This architecture concept implies a significant drawback: if only a small piece of the application needs to change, the whole application has to be compiled, tested and deployed again – including all parts of the application that don't experience any changes. This comes at big costs in terms of manpower, time and IT resources and in most cases leads to delays. In addition, a monolith makes it difficult to ensure:

  • Scalability
  • Availability
  • Agility
  • Continuous Delivery

CIOs can meet these challenges by changing the application architecture from one big object to an architecture design composed of small independent objects. All parts are integrated with each other, providing the overall functionality of the application. Changing one part doesn't change the characteristics and functionality of the other parts. Each part runs as an independent process, or service. This concept is known as microservice architecture.

What is a Microservice?

A microservice is an encapsulated functionality that is developed and operated independently. It is a small autonomous software component (service) that provides a sub-function within a big distributed software application. Thus, a microservice can be developed and deployed independently and scales autonomously.

Application architectures based on microservices are modularized and thus can be extended with new functionalities more easily and quickly, as well as better maintained during the application lifecycle.

Compared to traditional application architectures, modern cloud based architectures follow a microservice approach. This is because cloud native application architectures have to be adapted to the characteristics of the cloud, meaning that issues like scalability and high availability have to be considered from the very beginning. The benefits of microservice architectures are related to the following characteristics:

  • Better scalability: A sub-service of an application can scale autonomously if its functionality experiences higher demand, without affecting the remaining parts of the application.
  • Higher availability of the entire application: A sub-service that experiences an error doesn't affect the entire application but only the functionality it represents. A sub-failure doesn't necessarily affect customer-facing functionality, e.g. if the failing service is a backend service.
  • Better agility: Changes, improvements and extensions can be implemented independently of the entire application functionality without affecting other sub-services.
  • Continuous delivery: These changes, improvements and extensions can be rolled out on a regular basis without updating the whole application or entering a major maintenance mode.

Another benefit of microservice architectures: a microservice can be used in more than one application. Developed once, it can serve its functionality in several application architectures.
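
What such a service looks like in practice can be shown with a minimal Python sketch using the Flask framework; the /convert endpoint and its static rates are invented for illustration. The service encapsulates exactly one sub-function, runs as its own process and speaks JSON over HTTP, so any application (or other microservice) can consume it.

```python
# Minimal microservice sketch (assumes Flask is installed; the endpoint and
# rates are invented). One sub-function, one process, a JSON/HTTP interface.
from flask import Flask, jsonify, request

app = Flask(__name__)

RATES = {"EUR": 1.0, "USD": 1.09}  # static demo data, not a real rate feed

@app.route("/convert")
def convert():
    amount = float(request.args.get("amount", "0"))
    currency = request.args.get("to", "USD")
    return jsonify(amount=amount * RATES.get(currency, 1.0), currency=currency)

if __name__ == "__main__":
    app.run(port=5001)  # each microservice gets its own process and port
```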

Which Providers work with Microservices?

Today, a number of providers have already understood the meaning of microservice architectures. In particular, however, the big infrastructure players struggle with this transformation. Startups and cloud native companies show how it works:

  • Amazon Web Services
    From the very beginning, Amazon AWS designed its cloud infrastructure as a set of microservices (building blocks). Examples: Amazon S3, Amazon SNS, Amazon ELB, Amazon Kinesis, Amazon DynamoDB, Amazon Redshift
  • Microsoft Azure
    From the very beginning, the cloud platform has consisted of microservices. The recently introduced Azure Service Fabric offers capabilities for the development of own microservices. Examples: Stream Analytics, Batch, Logic App, Event Hubs, Machine Learning, DocumentDB
  • OpenStack in general
    The community extends the OpenStack portfolio with new microservices with each release, mainly for infrastructure operations. Examples: Object Storage, Identity Service, Image Service, Telemetry, Elastic Map Reduce, Cloud Messaging
  • IBM Bluemix
    IBM's PaaS Bluemix provides a number of microservices, offered directly by IBM or via external partners. Examples: Business Rules, MQ Light, Session Cache, Push, Cloudant NoSQL, Cognitive Insights
  • Heroku/Salesforce
    Heroku's PaaS offers "Elements", a marketplace of ready external services that can be integrated as microservices into an application. Examples: Redis, RabbitMQ, Sendgrid, Raygun.io, cine.io, StatusHub
  • Giant Swarm
    Giant Swarm offers developers an infrastructure for the development, deployment and operations of microservice based application architectures. For this purpose, Giant Swarm uses technologies like Docker and CoreOS.
  • cloudControl
    cloudControl's PaaS offers "Add-ons", a marketplace to extend self-developed applications with services from external partners. Examples: ElephantSQL, CloudAMQP, Loader.io, Searchify, Mailgun, Cloudinary

Based on their microservice portfolios, the providers offer a programmable modular construction kit of ready services that accelerate the development of an application. These are ready building blocks (see hybrid and multi-cloud architectures) whose functionality doesn't have to be developed again. Instead, they can be used directly as "bricks" within a company's own source code.

Example of a Microservice Architecture

Netflix, the video on demand provider, is not only a cloud computing pioneer but also one of the absolute role models for IT architects. Under the direction of Adrian Cockcroft (now Battery Ventures), Netflix has developed its own powerful microservice architecture to operate its video platform in a highly scalable and highly available way. Services include:

  • Hystrix = Latency and fault tolerance
  • Simian Army = High-availability
  • Asgard = Application deployment
  • Exhibitor = Monitoring, backup and recovery
  • Ribbon = Inter process communication (RPC)
  • Eureka = Load balancing and failover
  • Zuul = Dynamic routing and monitoring
  • Archaius = Configuration management
  • Security_Monkey = Security and tracking services
  • Zeno = In-memory framework

Netflix bundles all these microservices in its "Netflix OSS" stack, which can be downloaded as open source from GitHub.
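
The fault tolerance idea behind Hystrix can be sketched in a few lines of Python; this is a hand-rolled toy, not the Hystrix API (which is Java). After a number of consecutive failures the breaker "opens" and calls fail fast instead of piling up on a broken downstream service; after a cool-down it lets a trial call through.

```python
# Hand-rolled circuit breaker toy illustrating the pattern Hystrix implements
# (not the Hystrix API). Open circuit = fail fast; after a cool-down period,
# one trial call decides whether the circuit closes again.
import time

class CircuitBreaker:
    def __init__(self, max_failures=3, reset_after=30.0):
        self.max_failures = max_failures
        self.reset_after = reset_after  # cool-down in seconds
        self.failures = 0
        self.opened_at = None

    def call(self, func, *args, **kwargs):
        if self.opened_at is not None:
            if time.time() - self.opened_at < self.reset_after:
                raise RuntimeError("circuit open: failing fast")
            self.opened_at = None  # half-open: allow one trial call
        try:
            result = func(*args, **kwargs)
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = time.time()
            raise
        self.failures = 0  # success resets the failure counter
        return result
```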

An example from Germany is Autoscout24. The automotive portal faces the challenge of replacing its 2,000 servers, distributed over two data centers, and the currently used technologies based on Microsoft, VMware and Oracle. The goal: a microservice architecture supported by a DevOps model to implement a continuous delivery approach. Autoscout24 wants to stop its monthly releases and instead deliver improvements and extensions on a regular basis. Autoscout24 decided for the Amazon AWS cloud infrastructure and has already started the migration phase.

Microservice: The Challenges

Despite the benefits, microservice architectures come along with several challenges. Besides the necessary cloud computing knowledge (concepts, technologies, et al.), these are:

  • Higher operational complexity, since the services are very agile and movable.
  • Additional complexity from building a massively distributed system, including latency, availability and fault tolerance.
  • Developers need operational knowledge (DevOps).
  • API management and integration play a major role.
  • A complete end-to-end test is mandatory.
  • Ensuring holistic availability and consistency of the distributed data.
  • Avoiding high latency of the single services.

The Bottom Line: What CIOs should consider

Today, standard web applications (42 percent) still represent the major part of workloads on public IaaS platforms, followed at a distance by mobile applications (22 percent), media streaming (17 percent) and analytics services (12 percent). Enterprise applications (4 percent) and Internet of Things (IoT) services (3 percent) are still playing a minor part. The reason for the current segmentation: websites, backend services as well as content streaming (music, videos, etc.) are perfect for the public cloud. Enterprises, on the other hand, are still in the middle of their digital transformation and are evaluating providers as well as technologies for a successful change. IoT projects are still at the beginning or in the idea generation phase. Thus in 2015, IoT workloads make up only a small proportion of public cloud environments.

Until 2020 this ratio will change significantly. Along with the increasing cloud knowledge within enterprise IT and the ever-expanding market maturity of public cloud environments for enterprise applications, the proportion of this category will increase worldwide from 4 percent to 12 percent. Accordingly, the proportion of web and mobile applications as well as content streaming will decrease. Instead, IoT workloads will represent almost a quarter (23 percent) of workloads on public IaaS platforms worldwide.

These influences challenge CIOs to rethink their technical agenda and think about a strategy that enables their company to keep up with the shift in the market. They have to react to the end of the application lifecycle early enough by replacing old applications and looking for modern application architectures. However, a competitive advantage only exists if things are done differently from the competition and not merely better (operational excellence). This means that CIOs have to contribute significantly by developing new business models and new products, like an IT factory. The wave of new services and applications in the context of the Internet of Things (IoT) is just one opportunity.

Microservice Architecture: The Impact

Microservice architectures help IT departments respond faster to the requirements of business departments and ensure a faster time to market. In doing so, independent silos need to be dismantled and a digital umbrella should be stretched over the entire organization. This includes the introduction of the DevOps model to develop microservices in small, distributed teams. Modern development and collaboration tools enable this approach for worldwide distributed teams. This helps to avoid the shortage of skilled labor in certain countries by recruiting specialists from all over the world. A microservice team with the roles product manager, UX designer, developer, QA engineer and DB admin could thus be established across the world, accessing the cloud platform via pre-defined APIs. Another team composed of system, network and storage administrators operates the cloud platform.

Decision criteria for microservice architectures are:

  • Better scalability of autonomous acting services.
  • Faster response time to new technologies.
  • Each microservice is a single product.
  • The functionality of a single microservice can be used in several other applications.
  • Employment of several distributed teams.
  • Introduction of the continuous delivery model.
  • Faster onboarding of new developers and employees.
  • Microservices can be developed more easily and faster for a specific business purpose.
  • Integration complexity can be reduced since a single service contains less functionality and thus less complexity.
  • Errors can be isolated easier.
  • Small things can be tested easier.

However, the introduction of microservice architectures is not only a change on the technical agenda. Rethinking the corporate culture and interdisciplinary communication is essential. This also means that the existing IT and development teams need to change, either through internal training or external recruiting.

Guidance: The Internet of Things Stack for CIOs


CEOs and CIOs who consider entering the Internet of Things (IoT) market need to understand which capabilities await them and at which level they want to play along. In addition, developing an IoT solution means researching lots of providers and suppliers in order to find the appropriate partner. Crisp Research's IoT stack distinguishes the most important players in the Internet of Things, categorized into IoT platform providers and IoT product vendors.

The IoT platform providers are split into the categories IoT back-end and IoT enablement. Crisp Research classifies the different providers as follows:

  • IoT back-end providers: IT infrastructure is the foundation for the deployment of IoT applications and services, and cloud platforms play a dominant role here. Popular IoT back-ends are Amazon Web Services, Cisco IoT Cloud Connect or Microsoft Azure.
  • IoT enablement and middleware providers: This group includes providers of middleware that combines and integrates data as well as providers of analytics solutions that analyze and visualize data. Relevant players in this area are not only traditional IT providers like IBM, Intel and SAP but also several startups like Splunk and ParStream and industrial companies like Bosch SI and GE Software.
  • IoT network and connectivity providers: In order to establish a secure and powerful connection between the physical and the digital world and to attach sensors over several communication and network standards, network and telecommunication providers play an elementary role in the IoT value chain.
  • IoT integrators and consultants: These providers support companies with consulting during the conception, implementation and operation of IoT services and applications. They need process and industry know-how as well as experience with IoT projects.
  • IoT solution and service providers: IoT product vendors represent different kinds of companies, producing IoT devices (wearables, sensor systems) and IoT services for end users (smart home, fitness, self tracking) up to industry-specific solutions (Industrial Internet).
  • IoT users: IoT users are divided into the categories Industrial Internet, Consumer IoT and Government IoT. The Industrial Internet includes e.g. smart power grids, connected mobility and smart logistics. Consumer IoT includes wearables, smart home and self-tracking solutions. Government IoT contains solutions for healthcare, public security and the military.

CIO at the Crossroads: The Enterprise IT as a Digital Factory


The progress of the digital transformation as well as new megatrends like the Internet of Things (IoT) have finally brought CIOs to a crossroads. Even if statements like "IT is a business enabler" were dismissed as empty phrases in the past, reality is bringing CIOs back down to earth. No doubt, the CIO still has a pivotal role. However, he has to meet the challenge of understanding enterprise IT as a service provider for its internal customers, enabling the company to reach external customers with new digital and hybrid products. This only works if he considers the "Digital Enterprise" as a whole and restructures enterprise IT into a "Digital Factory".

Yesterday’s IT: Keep Things Running

Over the last three decades, IT departments worldwide have developed, introduced, updated and retired a raft of IT systems. They have digitized their companies by introducing and maintaining ERP and CRM systems, office solutions and self-developed applications. Yet they were never granted any further significance. The IT department merely maintained the "IT engine room" that no colleague wanted to, or was able to, deal with.

Today, everything is suddenly different. Due to the "Consumerization of IT" and easy access to IT resources, every employee is now able to use an iPhone or a SaaS application. Worse still, suddenly everybody is clamoring for digital transformation. In most IT departments there is little understanding for this. Going digital? After all, digital systems have been introduced and maintained for the last 30 years. Indeed, the term "digital transformation" is somewhat confusing, especially for someone who has worked in IT for decades and seen every development. Digital transformation describes the radical shift of a company into a fully interconnected digital organization. Based on new technologies and applications, more and more processes and process elements are reshaped and adjusted to the requirements (real-time, connected) of the digital economy. It is about the tight integration of entire process and supply chains within the company as well as with partners, suppliers and customers. Ultimately, it is about a closer customer relationship and a better understanding of the customer through a better customer experience. Digital transformation thus influences customer and business relationships and changes existing value chains or creates new ones. In the course of this, companies have the chance to develop new business models.

The IT department therefore has far more responsibility than just keeping the status quo. IT needs to understand itself as a strategic partner and business enabler and engage closely with the various departments to understand their needs and requirements. In the digital age, this can become a strategic competitive advantage for the company.

This is also the overall feedback from German companies. The results of Crisp Research's "Digital Business Readiness" study have shown that the majority of the interviewed companies see their own IT department as a strategist (34 percent) or an ideas provider (21 percent) in the context of their digital transformation.

Expectations are high, reinforced by the fact that more than half of the respondents (58 percent) understand digital transformation as an IT paradigm. IT departments and CIOs are thus under pressure to act as enablers and ideas providers for digital processes and ways of working in other departments – with good reason. Hidden champions and even global leaders can be found in several industries of Germany's economy. Especially these companies should pay close attention to digital change in order to retain, or ideally reinforce, their competitive and innovative capabilities in the future.

The Internet of Things and the Industrial Internet both stand for the digitization of all industries. New smart products will be developed, and existing "analog" devices will be enhanced with sensors, thus made "smart" and admitted into the digital value chain. Existing organizational and IT structures no longer help IT departments make the digital mind shift in time and react with innovative ideas to the requirements of corporate divisions in order to proactively serve customers with new products and solutions. Digital transformation requires rethinking and radical change within IT departments. One step of this transformation is the move from an enterprise IT to a digital factory.

From Enterprise IT to the Digital Factory

CIOs don't create any direct value for their companies if they just set up and operate an IT environment with standard applications. In this case they are merely a supporting force in the background without any influence on the business model or the business success. Those CIOs who have developed their own applications in the past, and thus already contributed to the business success, belong to the innovators of the IT guild. However, they too have to rethink their approaches, because applications and other IT solutions were completely focused on static processes in the past. Today's customer expectations, new business models and solutions for the Internet of Things behave dynamically (in real-time, so to speak) and thus need to be reflected in all processes and in the user experience.

To handle the challenges on the technical side, IT departments should stop thinking in silos and instead start transforming themselves into a corporate "digital factory".

At the center of the digital factory is a cloud-based IT environment representing the "digital power plant", powered by infrastructure-as-a-service (IaaS) and/or platform-as-a-service (PaaS). The power plant hosts an "Application Platform" where applications are developed and operated, as well as an "Analytics Engine" for the analysis and preparation of data. The resulting insights are accessible to the applications running on the application platform. The application platform can be used to develop completely new products or to digitize existing ones, for example by extending them with a sensor or a smart unit. The digital power plant is supplied with existing internal and/or external digital resources like data or cloud services, creating a hybrid environment. The digital or digitized products created in the power plant – mobile apps, SaaS or IoT solutions and engagement solutions – continuously feed the digital factory with data, which leads to ongoing improvement of the products or to entirely new ones. The integration of the digital factory with typical enterprise business solutions like ERP or CRM systems should also be seriously considered in order to create value from existing data and to interact better and more predictively with existing and new customers.

As part of this business enablement, the digital factory should also provide internal customers with an on-demand self-service that serves them the necessary resources – compute power, storage, microservices, development platforms or other SaaS solutions – exactly at the moment they need them. This helps them improve their productivity. A cloud environment is therefore the perfect foundation for a digital factory.
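
What such a self-service could look like at the API level is sketched below with the openstacksdk Python library, assuming an OpenStack-based private cloud; the cloud profile "digital-factory" and the image and flavor names are placeholders, not real resources. A developer (or a portal acting on their behalf) provisions a sandbox server in minutes instead of filing a ticket.

    # Sketch: on-demand self-service provisioning against an OpenStack
    # cloud via openstacksdk. Profile, image and flavor names are
    # placeholders, not real resources.
    import openstack

    conn = openstack.connect(cloud="digital-factory")  # reads clouds.yaml

    image = conn.compute.find_image("ubuntu-22.04")
    flavor = conn.compute.find_flavor("m1.small")

    server = conn.compute.create_server(
        name="dev-sandbox", image_id=image.id, flavor_id=flavor.id
    )
    server = conn.compute.wait_for_server(server)
    print(server.status)  # ACTIVE -> resource delivered without a ticket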

The Digital Factory is the Foundation of the Digital Enterprise

In the digital age, the IT department is expected to deliver an essential part of the product and to support its development and enhancement as well as process optimization. This is the fundamental basis for enabling a company to serve its customers with the best user experience and to provide them with innovative and superior services. A radical approach is necessary: questioning oneself and rethinking everything. The old entrenched structures no longer work in a digital company. Modern companies are mostly technology companies, independent of their industry, supported by IT. A clearly defined cloud strategy helps CIOs build the foundation for the digital enterprise. It gives their internal and external customers convenient and, above all, faster access to IT resources like dynamic infrastructure, platforms and other cloud services, and thus improves overall productivity and the customer experience.

The digital factory is the foundation for novel digital and hybrid products – e.g. for the Internet of Things – as well as for developing services and prototypes more efficiently.

The IT department needs to evolve from a maintainer into a product center. This includes developing new products and extending existing ones with digital resources. The digital factory is the technological foundation for this transformation.

Best-practice examples show that decision makers have recognized this importance:

  • Klöckner & Co. (CEO Gisbert Rühl), a steel and metal trader, drives its digital transformation with a new, detached team. To this end, the center of excellence "kloeckner.i" was founded, which promotes digitization together with colleagues from other corporate divisions. The classical IT at Klöckner & Co. is standardized. In its digital factory "kloeckner.i", most things are self-developed to build individual products, constantly answering the question: "What does the customer need to collaborate more easily and efficiently with Klöckner & Co.?"
  • Volkswagen AG (Head of Application Development, Ralf Bunken) considers IT a central element in the development of a car. Volkswagen specifically analyzes data to enhance products and to make its business processes – from development through manufacturing to sales – more efficient. However, a digital factory alone is not enough. Volkswagen therefore relies on close, interdisciplinary collaboration across several departments and among experts from different disciplines, partly in cooperation with IT suppliers. For example, the responsibility for technology and software in the "Connected Car" project lies in the hands of the automotive developers, while IT is responsible for everything outside the car; both areas work closely together. Volkswagen encourages its approximately 10,000 IT employees by hosting internal hackathons and offering "Data Labs" where every employee can try out new things.

Introducing the digital factory requires much more than just a technological restructuring. CIOs should also consider the following topics:

  • Treat digital transformation as an umbrella spanning the entire company.
  • Break up silos and promote interdisciplinary ways of working and acting.
  • Consider the DevOps model and microservice architectures to promote the continuous delivery of IT solutions.
  • Promote and establish developer know-how within the IT department and adjoining corporate divisions.
  • Consider the API economy a competitive advantage to increase customer engagement.

OpenStack Acquisitions: The sellout runs at full speed


This year OpenStack celebrates its fifth birthday. Despite its short history, the open source project has already caused quite a stir, driven by a well-funded ecosystem consisting of the who's who of the cloud and open source industry. This has a positive effect on market maturity. Linux needed around 10 years – twice as long – to achieve similar market relevance. The OpenStack Foundation has learned from its older brother and does a lot of things right. One signal of OpenStack's importance and market maturity is the acquisition activity. The sell-out runs at full speed.

OpenStack acquisitions over time

After only five years, the OpenStack community already looks back on a proud number of acquisitions. The year 2014 can be regarded as the starting shot, when the first big IT vendors started their shopping tour to snap up the strategically most important OpenStack startups for their portfolios.

  • April 2014: Red Hat buys Ceph storage vendor Inktank and transfers the product into its own portfolio as "Red Hat Ceph Storage".
  • June 2014: Red Hat buys OpenStack integration service provider eNovance to increase its consulting competency for its customers.
  • September 2014: Cisco buys OpenStack-as-a-Service provider Metacloud and transfers the product into its own portfolio as "Cisco OpenStack Private Cloud".
  • October 2014: EMC buys the OpenStack distributor Cloudscaling and will release its own OpenStack product called "Caspian" in November 2015, based on the Cloudscaling technology.
  • June 2015: Cisco buys OpenStack pioneer Piston Cloud Computing to strengthen its Intercloud initiative.
  • June 2015: IBM buys OpenStack private cloud provider Blue Box to expand its hybrid cloud offering.

During this wave of mergers, the big vendors have acquired most of the hitherto most promising OpenStack startups. This shows that OpenStack has made it to the top of the strategic agendas of the big players and underscores its importance. The vendors are expanding their portfolios but also want to become an integral part of the OpenStack market. In addition, they are buying up today's rare OpenStack knowledge from the market.

However, one attractive target has not been taken yet: Mirantis, even though the self-appointed pure-play OpenStack vendor is today's most high-profile acquisition candidate. With good reason: Mirantis offers its own distribution (Mirantis OpenStack), a hosted private cloud (Mirantis OpenStack Express) as well as consulting services (Mirantis Consulting) and training services (Mirantis Training). This makes it a quite appealing target for a prospective buyer wanting to make its portfolio OpenStack-ready in all respects at a single stroke.

The shopping tour of the big vendors speaks for their commitment and for OpenStack's advanced stage of maturity – indeed, a good sign for the further development of the open source project. However, it seems that the OpenStack ecosystem is increasingly coming under the control of the big players.

AWS Summit Berlin 2015: Germany embraces the Public Cloud


The spell is apparently broken: the public cloud model is massively gaining ground in Germany. Without the public cloud, most German companies will struggle to play a significant role in the future market of the Internet of Things (IoT) and to kick off their digital transformation. It looks like the topic has arrived on some German executive floors. The AWS Summit Berlin 2015 was a good indicator of this.

It is October 7, 2010: Amazon CTO Werner Vogels welcomes 150 developers at the "Kalkscheune" in Berlin. A family atmosphere, no partners, no booths, some snacks and drinks. It is the first AWS cloud computing event in Germany and, in effect, the very first AWS Summit in Germany.

On June 30, 2015, almost five years later, Werner Vogels is again on stage, again in Berlin, this time at the "CityCube", in front of over 2,000 attendees, developers and decision makers alike. Big booths, conference catering, 32 partners and 45 sessions across 9 tracks. All this shows the enormous growth of the AWS Summit Berlin and reflects the interest of German IT users in the public cloud and in Amazon Web Services (AWS).

Germany has become one of AWS's growth engines. According to German country manager Martin Geier, the Frankfurt cloud region (consisting of two data centers), which AWS opened in October 2014, is the fastest-growing international AWS region ever!

Innovation: 1,170 new Functions and Services in 7 Years

The growth in the German market is symbolic of AWS's global growth. According to AWS, over 1 million active customers of different sizes and from various industries are already using the public cloud infrastructure, including 3,600 customers from the education sector and over 11,200 non-profit organizations. To expand the customer base in the German startup scene, a partnership with Rocket Internet was established. Rocket has committed to recommending that all its prospective startups run their infrastructure and applications on AWS; existing startups are also advised to consider moving to AWS. AWS's next top target customer group is the public sector (schools and public authorities). For this purpose, the Summit hosted a public sector track for the first time. This is an important strategic step for AWS: if AWS is able to gain a foothold in one German government authority, it would set a precedent that could encourage other public customers to follow.

The growth on the customer side can also be seen at the technical level. Incoming and outgoing data transfer on Amazon S3 increased by 102 percent over the last year; usage of Amazon EC2 instances increased by 93 percent.

In addition, AWS operates 11 cloud regions consisting of 30 Availability Zones (AZs). A region consists of at least two AZs; an AZ consists of one or several data centers. Furthermore, 53 edge locations deliver data to customers in individual local markets more quickly. The 12th cloud region opens in India in 2016.
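
This region/AZ hierarchy can be inspected directly through the EC2 API. The following small Python sketch uses boto3 and assumes locally configured AWS credentials:

    # Enumerate AWS regions and their Availability Zones via the EC2 API.
    # Assumes AWS credentials are configured locally (e.g. ~/.aws/credentials).
    import boto3

    ec2 = boto3.client("ec2", region_name="eu-central-1")  # Frankfurt
    for region in ec2.describe_regions()["Regions"]:
        name = region["RegionName"]
        regional = boto3.client("ec2", region_name=name)
        zones = regional.describe_availability_zones()["AvailabilityZones"]
        print(name, [z["ZoneName"] for z in zones])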

Besides the advantage of being the first infrastructure-as-a-service (IaaS) provider on the market, two factors in particular account for the enormous head start: the service portfolio and the speed of innovation.

  1. Instead of just providing pure infrastructure, AWS has a huge portfolio of microservices that helps customers use the infrastructure profitably by developing web applications and backend solutions on top of it. At the same time, the infrastructure platform serves as a technical enabler for new business models.
  2. AWS is more innovative than any other cloud provider. In the last seven years, 1,170 new functions and services have been released, 516 of them in 2014 alone. AWS has more functionality than any other infrastructure provider.

AWS stands for the public cloud, as the provider and CTO Werner Vogels again made very clear. The private cloud has no place in the world of AWS, and the hybrid cloud is considered merely the journey, not the ultimate destination. Customers are therefore only provided with elementary solutions and services (Amazon VPC, AWS Direct Connect) for hybrid cloud scenarios, and this is likely to remain the case. Customers who have embraced this include Netflix, Kempinski Hotels, GPT, University of Notre Dame, Emdeon, Intuit, Infor, Splunk, Tibco and Informatica. These companies are "all-in" with the public cloud and AWS.

Corporate Customers discover the Public Cloud

Companies of all sizes can benefit from using the public cloud. Startups have the advantage of starting with a greenfield approach and don't drag any legacy along. They can grow gradually without making heavy investments in IT resources at the very beginning. Established companies need one thing in particular these days: speed, to keep pace with fast-changing market conditions. An idea is just the beginning; a fast go-to-market mostly fails in technical execution for lack of IT resources like modern tools and services that significantly aid development.

AWS counts the who's who of the startup scene among its customers. Now the aim is to steer more corporate customers onto the infrastructure. Unilever, Qantas, Dole, Netflix, Novartis and Nasdaq are already big international customers on the list. In Germany, after Talanx and Kärcher, a heavyweight was finally presented at the AWS Summit 2015: Audi.

Audi

Audi decided on AWS in the context of its new mobility program "Audi on Demand" to provide customers with individual services. The reasons are requirements for frictionless 24/7 operations as well as global scale capabilities, in order to reach global customers quickly and store data in the local markets. Audi therefore uses several AWS services and functions such as Amazon VPC, a multi-availability-zone concept and Amazon EC2. One decisive detail: Audi has transformed its organizational structure from a hierarchical model to a fully meshed model and built everything around IT.

Zalando

Zalando said goodbye to its own data center and moved the IT infrastructure of its e-commerce shops into the AWS cloud. This was done for strategic reasons, to promote the company's innovation and creativity. Zalando empowers its employees to act autonomously and provision the necessary IT infrastructure faster than before, enabling it to deliver new functions to customers more quickly. Zalando's example also shows how important cultural change is if cloud-supported innovation is to be promoted. The company therefore orients itself on something it calls "Radical Agility": the organization and architecture required to give teams more freedom to learn and, in this context, to allow them to make mistakes. In the end, this is the only way to understand how to develop massively complex applications.

Zanox

In a personal briefing, Sascha Möllering, lead engineer at Zanox AG, talked about his experiences with the AWS cloud. Möllering is largely responsible for building the virtual IT infrastructure and developing the backend service used by the Zanox affiliate marketing network. Zanox operates its own data center infrastructure in Berlin to serve the European market, but the number of customers in Brazil is constantly increasing. Zanox depends on global scale to provide customers in this market with good performance; the challenge is latency, since data must be delivered as fast as possible. This is the main reason Zanox decided on AWS. The "Sao Paulo" cloud region offers Zanox three availability zones and four edge locations in Brazil. Möllering developed a native AWS application but focused on core services only, trying to avoid an AWS service lock-in, because Zanox doesn't want to relinquish its own data center infrastructure in Berlin and connects AWS in a hybrid model. In addition, Möllering plans for AWS failures and has considered a multi-region as well as a multi-cloud scenario. He has therefore developed his own module implementing the APIs of Amazon Kinesis, Microsoft Azure Service Bus and Apache Kafka to make sure the incoming data stream – and thus data – is never lost.
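
Zanox's module itself is not public. The sketch below merely illustrates the general pattern of hiding several messaging backends behind one narrow interface, here in Python with the boto3 and kafka-python libraries; stream and topic names are made up, and an Azure Service Bus sink would implement the same interface.

    # Illustrative pattern only -- not Zanox's actual module: one narrow
    # publishing interface with interchangeable backends, so the failure
    # of one provider does not lose the incoming data stream.
    from abc import ABC, abstractmethod

    import boto3
    from kafka import KafkaProducer

    class StreamSink(ABC):
        @abstractmethod
        def put(self, key: str, data: bytes) -> None: ...

    class KinesisSink(StreamSink):
        def __init__(self, stream_name: str, region: str):
            self.client = boto3.client("kinesis", region_name=region)
            self.stream_name = stream_name

        def put(self, key: str, data: bytes) -> None:
            self.client.put_record(
                StreamName=self.stream_name, Data=data, PartitionKey=key
            )

    class KafkaSink(StreamSink):
        def __init__(self, topic: str, bootstrap_servers: str):
            self.producer = KafkaProducer(bootstrap_servers=bootstrap_servers)
            self.topic = topic

        def put(self, key: str, data: bytes) -> None:
            self.producer.send(self.topic, key=key.encode(), value=data)

    def publish(sinks: list[StreamSink], key: str, data: bytes) -> None:
        # Try backends in order until one accepts the record.
        for sink in sinks:
            try:
                sink.put(key, data)
                return
            except Exception:
                continue
        raise RuntimeError("all stream backends failed; record not persisted")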

The Public Cloud has arrived in Germany

The AWS Summit in Berlin was a good indicator that more and more German companies are discovering the public cloud. Conversations with customers, partners and system integrators show good progress. Audi, a leading German car manufacturer from one of the oldest industries, has shown that it recognizes the capabilities and necessity of the public cloud.

Trust in the public cloud is growing, and time is on the side of the public cloud providers and against the German companies. Those who do not transform digitally will sooner or later disappear from the market. Those who want to keep pace with digital transformation (e.g. Internet of Things, smart products, e-commerce) cannot avoid using the public cloud, its services, modern development tools and its global scalability.

AWS is one of the public cloud providers that can help during this transformation process. However, AWS is complex – in setup as well as in the operation and administration of the virtual infrastructure, and thus also in the development of web applications and backend services. No question about it. The conversation with Zanox made this clear once again and confirmed several conversations with other parties interested in AWS. Moreover, the necessary cloud knowledge is still scarce, and it will take some time until it is broadly available.

CIOs can generate sustainable business value with OpenStack! Here's how it works.


OpenStack is on everyone's lips and is slowly working its way into the IT infrastructure of German companies. 58 percent of German CIOs see OpenStack as a true alternative to commercial cloud management solutions. However, IT decision makers should look closely at what they are holding in their hands. OpenStack is basically just an infrastructure management solution and contributes no direct value to business success. Yet this is exactly what CIOs have to deliver in order to position themselves as business enablers. This raises the question of how OpenStack can create significant added value and step out of the shadow of a simple open source cloud management software.

CIOs have to ask the fundamental question of how OpenStack can provide a strategic advantage. This is only the case if they use OpenStack differently from their competition and do not restrict themselves to operational excellence. Rather, it is about understanding the OpenStack technology as part of their IT strategy and using it to create real value for the company.

Playground: Enablement Platform for Developers

CIOs who want to achieve business value with OpenStack need to see more in it than a pure management solution for their cloud infrastructure. Then it is about much more than cost savings and running an IT infrastructure. CIOs have to understand OpenStack as an enablement platform for their developers and use the open source solution in exactly this way. CIOs who only want to focus on operational excellence can use one of the several standard OpenStack distributions; they should keep in mind, however, that this makes them just one of many. For those who see a strategic vehicle in OpenStack, a distribution is a good foundation on which to extend OpenStack into an enablement platform.

OpenStack as an enablement platform means that developers are provided with much more than just compute (virtual machines), storage or databases. It is about providing higher-value services of the kind found in the portfolios of Amazon Web Services and Microsoft Azure: microservices that support application development. These are ready-made building blocks whose functionality doesn't have to be developed again; instead they can be used directly as a "brick" within one's own source code. The OpenStack community has recognized this importance and is slowly trying to bring parts of the AWS and Azure service portfolios over to OpenStack. The first such services are Sahara (Elastic MapReduce) and Zaqar (multi-tenant cloud messaging). Of course, this is by far not enough, and further microservices are needed to make OpenStack a powerful enablement platform.
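
The appeal of such building blocks is that a few API calls replace operating one's own middleware. As a hedged illustration from the AWS portfolio referenced above, the following Python (boto3) sketch consumes a managed queue; the queue name is made up, and Zaqar aims to offer a comparable capability within OpenStack:

    # Illustration of the "ready building block" idea: a managed queue is
    # used with a few API calls instead of operating a message broker.
    # Uses AWS SQS via boto3; the queue name is made up.
    import boto3

    sqs = boto3.client("sqs", region_name="eu-central-1")
    queue_url = sqs.create_queue(QueueName="order-events")["QueueUrl"]

    sqs.send_message(QueueUrl=queue_url, MessageBody='{"order_id": 4711}')

    response = sqs.receive_message(QueueUrl=queue_url, MaxNumberOfMessages=1)
    for msg in response.get("Messages", []):
        print(msg["Body"])
        sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=msg["ReceiptHandle"])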

However, one thing shouldn't be underestimated in this scenario: the significant investments that are necessary. Building a massively scalable and globally available cloud infrastructure or enablement platform like those of Amazon Web Services or Microsoft Azure is complex as well as costly. This must not prevent CIOs from running their own OpenStack-based platform, though. Partnerships with hosting providers, for example, are suitable for running OpenStack in different deployment models. The German research paper "Managing OpenStack: Heimwerker vs. Smarte Cloudsourcer" describes which variants are suitable.

Despite OpenStack's high complexity, CIOs can make use of its openness and flexibility to build custom solutions based on the various sub-projects.

Innovation: Leave the Community

Joint work within a community is important to push a project like OpenStack forward successfully, and all involved parties benefit from the ideas of other community members. The big disadvantage, however, is that any one member is only as good as the community itself. Nor does the community concept work for differentiating from the competition: in the end, the focus is again entirely on operational excellence, and a technological or strategic advantage cannot be achieved.

The cloud market shows that going it alone can be part of a recipe for success. Providers like Amazon Web Services (AWS), Microsoft Azure, ProfitBricks in Germany and CloudSigma in Switzerland have built individual infrastructure environments. This strategy has helped AWS achieve a massive technology advantage in the cloud market.

Another example in the open source space is Canonical. The Linux distributor is well known for using the Linux open source code while giving back only a little (e.g. patches). The behavior is the same for the OpenStack project, as the numbers prove (see http://stackalytics.com). Idealists may have a problem understanding the attitude of Mark Shuttleworth (founder of Canonical). In the end, however, it is about business and thus about operating in the black.

OpenStack is a perfect foundation for CIOs' cloud infrastructure. For this purpose, standard OpenStack services like compute, storage or identity management can be used. In the end, however, it is about one thing: individuality. This means that a CIO should say goodbye to the community concept in order to focus on innovation. The strategy is to look at what the OpenStack community has to offer, adopt the necessary services and develop individual enablement services and solutions on top of them. Using standard OpenStack is sufficient neither to differentiate from the competition nor to create serious value for the company.

Business Value: Applications and Services support new Business Models

The Internet of Things (IoT) is the next megatrend rolling towards companies, forcing them to finally start their individual digital transformation. OpenStack is a powerful tool and a global standard CIOs can use to support this individuality. However, OpenStack is just a means to an end and so far offers only basic functionality. Thanks to the open source approach, though, OpenStack can be completely customized to one's own needs and is thus the perfect foundation for individual backend solutions supporting mobile and IoT applications.

For this purpose, CIOs should move beyond standard OpenStack, use a distribution as a foundation and extend it with individual functionality. This means they have to come to grips with OpenStack's complexity. However, it is exactly this technological excellence that will pay off over time, differentiating them from the competition, yielding a technical advantage and thus creating added value.

Public Cloud Providers in Germany: Frankfurt is the Stronghold of Cloud Data Centers


In the beginning, US public cloud players didn't care much about Germany. In the meantime, however, more and more providers are active in the German market. Above all, Frankfurt has emerged as the Mecca of cloud data centers.

The top 5 reasons for Germany's attractiveness are:

  • a stable political situation.
  • a central location in Europe.
  • high data privacy laws.
  • a geographically stable situation.
  • a high economic performance (fourth largest national economy in the world and number one in Europe).

Frankfurt is the Cloud Hub in Germany

Data centers are experiencing their heyday. These "logistics centers of the future" are coming to the fore as never before and provide the backbone of the digital transformation – with good reason. Over the last decade, more and more data and applications have been moved to the IT infrastructure of globally distributed data centers. The significance of data centers and IT infrastructure as a logistics vehicle for data is no accident. German companies have recognized this too: more than two-thirds (68 percent) see data centers as the most important building block of their digital transformation.

Over the last 20 years, a cluster of infrastructure providers has been established in Frankfurt that helps companies of the digital economy position their products and services in the market. These providers have shaped Frankfurt and its economy and deliver integration services for IT and networks as well as data center services. More and more public cloud providers have recognized that they have to be on site in local markets and thus near their customers – despite the inherently global character of a public cloud infrastructure. This is an important insight: no provider who wants to do serious business in Germany can do without a local data center location.

The research paper "The significance of Frankfurt as a location for Cloud Connectivity" deals with the question of why Frankfurt is the preferred location for cloud data centers.

Public Cloud Providers and their Data Centers in Germany

A look at the most important public cloud providers for the German and European market shows that half of them have already decided on Frankfurt as their preferred data center location. That a German data center pays off is proven by Amazon Web Services (AWS). According to German country manager Martin Geier, the Frankfurt cloud region (consisting of two data centers), which AWS opened in October 2014, is the fastest-growing international AWS region ever. The AWS region has also helped accelerate the German cloud market. On the one hand, AWS customers welcome the opportunity to physically store their data in their own country. On the other hand, AWS's efforts also help local providers like ProfitBricks, who indirectly profit from a constant stream of new customers.

  • Despite being a vigorous follower of Amazon Web Services, Microsoft has not yet managed to open a datacenter in Germany. This is hard to explain, considering that Microsoft has had its own legal entity in the German market for a long time and should be well familiar with the concerns and requirements of German enterprise customers. Microsoft's efforts in the cloud are considerable and display a clear trajectory. Certainly, only by establishing a local datacenter would Microsoft be able to finally convince its German customers and meet their requirements. Rumors are spreading that Microsoft will open one in Q2 2016. One possible strategy could be to partner with a large local player, similar to the joint efforts of Salesforce and T-Systems. At the moment, Microsoft relies on technology partnerships (Cloud OS Partner Network) with local managed service providers, who build their own Azure-based cloud environments based on the Azure Pack.
  • At CeBIT, VMware officially announced the General Availability (GA) of its German datacenter. The outlook for the technology provider is generally good. On the one hand, a large part of on-premise infrastructure is already virtualized with VMware technologies. On the other hand, businesses are searching for ways to migrate their existing workloads (applications, systems) to the cloud without too much hassle and too many modifications. Even though VMware's own public cloud offering focuses on standardized workloads, it still competes directly with a range of its partners (cloud hosting providers, managed service providers) who have built their offerings on VMware technologies.
  • The American provider Digital Ocean has had its own datacenter in Germany since April 2015. Digital Ocean is a niche provider to keep a close watch on. It targets mostly developers and not so much enterprise customers. Moreover, if Digital Ocean seriously wants to stand in the ring against Amazon Web Services, then the company must offer more than boring SSD cloud servers and a couple of applications to its customers.
  • Rackspace is not yet represented by a local datacenter in the German market; yet its business is expanding into the DACH market, where Germany is of strategic importance. A local datacenter would certainly underline that commitment. Rackspace could hold winning cards as a managed cloud service provider, because the majority of German businesses are already engaging with managed cloud services.
  • On account of the partnership with T-Systems, Salesforce will likely set up shop in Frankfurt. The datacenter's opening is announced for August 2015.
  • At present, one should not expect Google to open a datacenter in Germany. This reflects Google's attitude of determining the rules of the game itself and concentrating on itself rather than on the needs, challenges and concerns of its customers, which is widely apparent in the Google Cloud Platform. To date, Google has not addressed the requirements of corporate customers.

Moreover, it is worth noting that the lower costs and innovation capabilities of public cloud providers put continuously rising pressure on managed service providers with their own datacenters. These providers are now forced to change their strategies and become managed service providers who operate on top of public cloud environments, managing their customers' applications and systems on cloud infrastructure such as AWS. Only specific, mission-critical workloads remain in the providers' own datacenters.

Frankfurt leads in the density of datacenters and interconnected Internet exchange points, not only in Germany but Europe-wide. The continuous movement of data and applications onto the infrastructure of external providers has made Frankfurt the citadel of cloud computing in Europe. Strategic investments by colocation providers such as Equinix and Interxion have underlined the significance of the location.

As most of the relevant public cloud providers have already found their place in Frankfurt, Crisp Research envisions an important trend for the next couple of years: an increasing number of international providers will build their public cloud platforms in Frankfurt and continue to expand and develop them there.
