Channel: Analyst POV

Analyst Comment: The importance of cloud marketplaces (Video)


Cloud marketplaces are gaining in importance. Like a webshop, they bundle software and services from several providers. Why the screening performed by a marketplace operator is an advantage and what must be improved in the future is discussed in this video.

Source: Crisp Research


Analyst Interview: The Internet of Things (Video)


By 2025, an estimated 30 billion interconnected devices will be in circulation. The Internet of Things, as a technology-driven trend, offers CIOs a range of opportunities to open up new IT-based business portfolios.

Source: Crisp Research

Analyst Report: CIO’s OpenStack Dilemma – BUY or DIY?


OpenStack has quickly become an important factor in the cloud infrastructure business. Started in 2010 as a small open source project, the solution is now used by over a hundred companies and organizations around the world, among them big enterprises (PayPal, Wells Fargo, Deutsche Telekom) as well as innovative cloud startups and developers. In the early years, its initiators used OpenStack to build their own, partly proprietary cloud environments. More than 850 companies support the project, including IBM, Oracle, Red Hat, Cisco, Dell, Canonical, HP and Ericsson.

Although OpenStack is an open source solution, this does not imply that setup, operations and maintenance are easy to handle. OpenStack can act like a real beast. A number of CIOs who operate self-developed OpenStack infrastructures report a significant rise in costs and complexity. To customize OpenStack for their individual requirements, they made several delicate adjustments and, as a result, ended up with OpenStack implementations that are no longer compatible with the current releases. This leads to the question of whether a “build” or “buy” strategy is the right approach to deploying OpenStack in the corporate IT environment.

My Crisp Research colleague Dr. Carlo Velten and I took a critical look at this topic and answer the key questions for CIOs and IT decision makers in our Analyst Report „Der CIO im OpenStack Dilemma: BUY oder DIY?“.

The Analyst Report „Der CIO im OpenStack Dilemma: BUY oder DIY?“ can be downloaded at http://www.crisp-research.com/report-der-cio-im-openstack-dilemma-buy-oder-diy/.

Analyst Interview: Disaster Recovery (Video)


Disaster recovery plays only a minor part for SMEs. Yet this negligent attitude can harm the business in a fraction of a second. New operating models from the cloud age leave no room for excuses.

Source: Crisp Research

Public PaaS fails to ignite: The German market sends cloudControl into exile in the United States


Docker Inc. is parting with its platform-as-a-service (PaaS) problem child dotCloud and selling it to cloudControl Inc., the US subsidiary of cloudControl GmbH from Berlin, Germany.

At present, Docker is making plenty of noise. The container technology stands for the future of modern applications and multi-cloud deployments and is supported by many big names, including Amazon Web Services, Google and Microsoft. Founded in 2010 as dotCloud Inc., the company developed the first polyglot PaaS. As part of its further development, Docker was created and later released in March 2013. In October 2013, the company was renamed Docker Inc. to concentrate fully on developing the Docker technology.

With about 500 customers gained within four years, one cannot speak of a dotCloud success story; that story belongs to bigger competitors like Heroku (now part of Salesforce), EngineYard, Microsoft Azure or the Google App Engine. After the terrific success of the Docker container, it was therefore a wise strategic step to focus on its further development and find an exit for the PaaS business. In cloudControl, a grateful buyer has been found.

cloudControl to focus on the US market

cloudControl was the very first – and is still one of the very few – public PaaS providers in Germany. Founded in 2009, the Berlin-based company relies on Amazon Web Services (EU region, Ireland) as its infrastructure foundation. The company positioned itself as a rock-solid European PaaS provider. Over its five years on the market, cloudControl has managed to feed 20 employees, register 50,000 developers on its platform, and acquire 500 paying customers. It generated 1.45 million EUR in 2013. [1] Thus, an average customer spends around 240 EUR per month (1.45 million EUR / 500 customers / 12 months ≈ 240 EUR). For a German public cloud provider this is a considerable amount; yet, it is not possible to replicate this across the international arena.

Even though cloudControl has technically evolved from a pure PHP PaaS into a polyglot platform with a stable ecosystem over recent years, the revenue exemplifies the stagnant situation of public PaaS in Germany. The Berlin-based company recognized this as early as July 2013 by launching the so-called “Application Lifecycle Engine”, an extraction of the technical foundation of its PaaS platform. The engine can be used by enterprises to build a private PaaS or by hosting providers for white-label PaaS solutions.

This idea did little to raise cloudControl’s overall popularity. According to a Crisp Research study of 83 German ISVs, experience with and use of cloudControl are relatively low compared to other PaaS providers on the market. And this despite cloudControl being the only significant public PaaS provider in Germany.

With the acquisition of dotCloud, cloudControl “flees” to the US market to start its second attempt. The number of customers has doubled to 1000. All dotCloud employees will stay with Docker. For now, the dotCloud technology will not be touched and will be integrated into the cloudControl platform over time. This is to ensure that the existing dotCloud customers remain loyal to the platform.

Public PaaS is of minor importance in Germany

The cloudControl dilemma reflects the fundamental reservations about the public cloud in Germany. Even the infrastructure-as-a-service (IaaS) market struggles: spending of 210 million EUR on IaaS and a 38 percent usage rate of public cloud services in 2013 speak for themselves.

The statements of the polled ISVs in the study paint a clear picture. Asked for their most favored operating model for using PaaS services as part of the development process, “only” 21 percent supported the public cloud model, while 12 percent would opt for internal operations on a private PaaS platform. The majority of more than 60 percent would use PaaS services for development and testing as part of a hosting model.

Regarding application operations, the polled ISVs are even more reserved. Only 11 percent would endorse operations in a public cloud environment. The majority (38 percent) see hosted PaaS as the favored operating model to run applications in the cloud. A further 30 percent would select the more dedicated alternative (“Hosted Private PaaS”) as their preferred model. Over one-fifth of the polled ISVs would operate their applications in a private PaaS environment.

Germany is a tough market

To give cloudControl sole blame for its difficult situation would be too easy. In fact, the overall state of the German market, especially regarding the cloud, is partly accountable. The subsidiaries of the US flagship cloud providers have to deal with this every day as well. However, there are five main reasons for cloudControl’s slow gain of momentum in Germany so far:

  • cloudControl was on the market very early. Moreover, as a startup the company faced the mammoth task of promoting cloud and PaaS in innovation-averse Germany.
  • The actual market growth for PaaS and cloud in general in Germany is still to come.
  • The German market and cloud have a complicated relationship, and public cloud services in particular are a tough sell with decision makers.
  • Platform as a service requires a great deal of explanation.
  • cloudControl is a typical developer-driven company, where little weight was placed on marketing and PR to increase awareness.

For cloudControl, the acquisition of dotCloud definitely makes sense and is a purchase among equals. Both companies share the same technological DNA. However, the challenge for cloudControl will be the integration of the platforms and processes, especially the migration of the existing dotCloud customers and their applications, in order not to lose anyone.

The dotCloud acquisition gives cloudControl immediate growth of its customer base and gives the Berlin-based company an entry into the US PaaS market. Even if cloudControl steps into the ring with the same competitors as dotCloud (Heroku, EngineYard, etc.), the US market entry must be seen as an opportunity. After all, the total market offers enough space for more than one PaaS provider. And maybe in the near future cloudControl can say: „So Long Germany, and Thanks for All the Fish“. The company would deserve it.

[1] http://www.wiwo.de/erfolg/gruender/startup-der-woche-cloudcontrol-verdoppelt-sein-geschaeft/10280388.html

OpenStack Deployments Q3/2014: Private Clouds are far ahead – Hybrid Cloud is rising


The momentum of the open source cloud infrastructure software OpenStack remains strong. Together with the continuous improvement of the technology (current release: Icehouse), the adoption by enterprise customers is characterized by very healthy growth rates. This is clear from the worldwide growth of the OpenStack projects. Although the number of new projects in Q3 went up by only 30 percent as compared to Q2, which exhibited an impressive 60 percent growth over Q1, the appeal is still unabated.

Here, on-premise private clouds are by far the preferred deployment model. In Q2 2014, the OpenStack Foundation counted 85 private cloud installations worldwide. In Q3, the number already grew to 114. By comparison, the number of worldwide OpenStack public clouds grew by a mere 7 percent, falling steeply from the 70 percent jump just a quarter before. At 57 percent, hybrid cloud projects show the biggest growth. For the next 12 months, Crisp Research expects 25 percent growth for OpenStack-based enterprise private clouds.

– –
* The numbers are based on the official statistics of the OpenStack Foundation.

Get in the lead – OpenStack is an important strategic investment for HP


Since the release of the HP Helion Cloud, HP’s whole cloud computing portfolio has been using OpenStack as the foundation for both public and private cloud offerings. Since the first release, “Austin”, HP has been an active OpenStack contributor. The company committed 13 percent of the code to “Austin“ and scaled back for “Bexar” and “Cactus”. With “Diablo,” the momentum picked up again but remained under 4 percent.

“Essex” can be seen as the turning point in the strategy, treating OpenStack as a more important project. HP contributed 7 percent of the overall code for the Essex release. For the next release, “Folsom” (16 percent), the pace accelerated, and for “Grizzly” the share increased by one percentage point to 17 percent.

The contribution decreased for “Havana” (16 percent) and “Icehouse” (11 percent). But since development started on OpenStack “Juno”, HP has become the number one contributor. Compared to the current “Icehouse” release, the code contribution has already risen by 9 percentage points for “Juno” (20 percent), which will be released in October 2014.

Driving force for a good reason

With respect to the overall OpenStack contribution, HP has so far delivered 13 percent of the OpenStack code. This makes HP the third largest code contributor after Rackspace (18 percent) and Red Hat (16 percent).

HP has a good reason to become the driving force behind OpenStack. Since its whole cloud computing portfolio completely depends on OpenStack, HP must increase its OpenStack influence. This can be accomplished through a seat in the OpenStack Foundation (HP is already a Platinum Member) as well as through the development of its own code and ideas. HP already has an advantage stemming from its substantial experience in infrastructure management and software.

HP has been part of the project since the first OpenStack release. The HP Helion Cloud (formerly HP Cloud) is also based on OpenStack. But the numbers clearly speak for themselves. HP has finally recognized OpenStack as an important strategic investment and is doing well by raising its investments in OpenStack. For the next 12 months, Crisp Research expects 25 percent growth for OpenStack-based enterprise private clouds. Here, HP can surely play a leading role.

– –
* The numbers are based on official statistics from the OpenStack Foundation.

The pitfalls of cloud connectivity


The continuous shift of business-critical data, applications and processes to external cloud infrastructures is not only changing the IT operating concepts (public, private versus hybrid) for CIOs but also the network architectures and integration strategies deployed. As a result, the selection of the location to host your IT infrastructure has become a strategic decision and a potential source for competitive advantage. Frankfurt is playing a crucial role as one of the leading European cloud connectivity hubs.

The digital transformation lashes out

The digital transformation is playing an important part in our lives today. For example, an estimated 95 percent of all smartphone apps are connected to services hosted on servers in data centers located around the world. Without a direct and uninterrupted connection to these services, metadata or other information, apps do not function properly. In addition, most of the production data needed by the apps is stored on systems in a data center, with only a small percentage cached locally on the smartphone.

Many business applications are being delivered from cloud infrastructures today. From the perspective of a CIO, a reliable, high-performance connection to systems and services is therefore essential. This trend will only continue to strengthen. Crisp Research estimates that in the next five years around a quarter of all business applications will be consumed as cloud services. At the same time, hybrid IT infrastructure solutions, using a mix of local IT infrastructure and IT infrastructure located in cloud data centers, are also becoming increasingly popular.

Data volumes: bigger pipelines required

The ever-increasing data volumes further increase the need for reliable, high-performance connectivity to access and store data and information any place, any time, especially when business-critical processes and applications are located on cloud infrastructure. For many companies today, failure to offer their customers reliable, low-latency access to applications and services can lead to significant financial and reputational damage, and represents a significant business risk. Considering that the quality of a cloud service depends predominantly on the connectivity to and performance of the back end, cloud connectivity is becoming the new currency.

Cloud connectivity is the new currency

“Cloud connectivity” can be defined technically in terms of latency, throughput and availability.

In simple terms, cloud connectivity could be defined as the enabler of real-time access to cloud services any place, any time. As a result, connectivity has become the most important feature of today’s data centres. Connectivity means the presence of many different network providers (carrier neutrality) as well as a redundant infrastructure of routers, switches, cabling, and network topology. CIOs are therefore increasingly looking at carrier neutrality as a prerequisite, facilitating a choice between many different connectivity options.
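To make the latency part of this definition tangible, here is a small, purely illustrative Python sketch (not taken from the whitepaper) that estimates the round-trip time to a cloud endpoint over repeated HTTPS requests; the endpoint URL and sample count are placeholders.

```python
# Illustrative only: a rough probe for the "latency" dimension of cloud connectivity.
# The endpoint below is a placeholder, not a real service referenced in the article.
import time
import urllib.request

ENDPOINT = "https://cloud-service.example/health"  # hypothetical health-check URL

def average_latency_ms(url, samples=5):
    """Return the average round-trip time in milliseconds over several requests."""
    timings = []
    for _ in range(samples):
        start = time.perf_counter()
        try:
            with urllib.request.urlopen(url, timeout=5) as response:
                response.read()
        except OSError:
            continue  # only successful round trips are counted
        timings.append((time.perf_counter() - start) * 1000)
    return sum(timings) / len(timings) if timings else None

if __name__ == "__main__":
    print(f"average round-trip time: {average_latency_ms(ENDPOINT)} ms")
```

Throughput and availability would be measured analogously, for example by timing larger transfers and by tracking the share of failed probes over a longer period.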

Frankfurt is the perfect example of a cloud connectivity hub

In the past 20 years, a cluster of infrastructure providers for the digital economy has formed in Frankfurt, which enables companies to distribute their digital products and services to their customers effectively and efficiently. These providers have made Frankfurt the German capital of the digital economy, delivering a wide range of integration services for IT and networks as well as IT infrastructure and data centre services. More and more service providers have understood that despite the global nature of a cloud infrastructure, a local presence is crucial. This is an important finding, as no service provider that is seriously looking to do business in Germany can do so without a local data center. Crisp Research predicts that all major international service providers will build or expand their cloud platforms in Frankfurt within the next two to three years.

Against this backdrop, Crisp Research has examined the unique features of Frankfurt as an international hotspot for data centres and cloud connectivity. The whitepaper, titled “The Importance of Frankfurt as a Cloud Connectivity Hub”, is available for download now: http://www.interxion.com/de/branchen/cloud/die-bedeutung-des-standorts-frankfurt-fur-die-cloud-connectivity/download/


Disaster Recovery for SMEs: No apologies!


Disaster Recovery currently plays a minor role in the strategic planning and daily routine of SMEs. Yet this negligent attitude can cause harm to the business in a fraction of a second. With the advent of new operational models in the cloud age, there is no longer room for excuses.

A disaster recovery strategy is absolutely essential

The digital transformation tsunami will also hit SMEs. When the main shock will strike varies from company to company and is not 100 percent predictable. However, one thing is for certain: the cloud, social, mobile and big data quake has already begun. Mid-market businesses will feel the altered reality with greater intensity, as they are generally unable to ensure the high availability of business-critical IT services and applications.

IT security and emergency plans in companies are still seen as an annoying and expensive prevention measure to secure mission-critical digital assets and to keep the company alive in exceptional circumstances. When it hits the fan, all ways and means are set in motion to initiate appropriate rescue operations and life-saving procedures. But at this point it is usually too late, and the struggle for survival begins. Technical shortcomings and human errors in IT systems will lead to helplessness and long production environment downtimes when no solid disaster recovery measures have been instituted prior to the incident. Business operations can be affected for days, leading to high internal costs as well as substantial damage to one’s image. This is fatal and negligent behavior. Today, no company can get away with leaving its partners, suppliers or customers without access to its IT infrastructure over a long period of time.

Those looking for the culprits only in the IT departments are searching at the wrong hierarchy level. Administrators and CIOs have the disaster recovery topic on their agenda but have their backs up against the wall. On the one hand, they are unable to act since they have no budget for it. On the other hand, when exceptional circumstances occur, it is their heads that are on the line. In IT, the incremental ROI from deploying a disaster recovery strategy is not easy to measure and, hence, it has low priority for decision makers. For this reason, insufficient budgets are provided for a true disaster recovery strategy. At the same time, emergency plans based on a classical backup data center setup are very complex as well as cost-intensive and time-consuming.

A further challenge can be found at the architectural level. Cloud models are changing the way applications are developed, operated and provided. Applications are no longer operated only on a company’s own on-premise infrastructure, but can also reside in a hybrid model, distributed across one or several cloud or hosting providers. The multi-cloud concept is more widespread than one expects. This trend will strengthen in the coming years and have a direct impact on disaster recovery strategies that aim at the recovery of information and applications in an emergency, without loss of time or data.

From hardware to service: The metamorphosis of disaster recovery

At present, a good portion of the IT budget in companies is spent on storage and server systems as well as software solutions to implement their own backup concepts. In most cases, today’s approaches do not consider any procedure for a 24×7 recovery of data and systems and include only a simple backup and RAID architecture. Comprehensive disaster recovery scenarios are very complex. Because of hybrid application architectures and distributed data management, this complexity will increase further over the coming years.

This development will reach companies in the mid-term, having an impact on the spending ratio for hardware and software, as well as on services and audits. From 2014 to 2018, Crisp Research expects a significant shift of cost pools for IT resources in these product categories.

This shift of the cost structure is due to the following influences:

  • New operating models (e.g. Cloud Computing)
  • Virtualization
  • Standardization
  • Converged Infrastructure

Based on these influences, companies can build and operate new operational concepts that are less complex and cheaper but offer the same effectiveness that historically only big enterprises have been able to establish and use. This leads to a clear decline in technology investment (hardware/software) and a redistribution of investment to the other areas over the next few years. Hence, spending on services (cloud and managed services) as well as on audits/compliance rises. Audits, in particular, will gain in importance. On the one hand, this is due to the constantly changing laws companies must comply with. On the other hand, it results from the growing effort required to monitor disaster recovery providers with respect to compliance with legal guidelines (data retention periods, data management in accordance with the law, etc.).

Requirements of SMEs for modern disaster recovery solutions

The market for disaster recovery solutions is developing continuously. In this context, three requirements that affect its sustainability stand out:

  1. Cost efficiency
    Technology expenses must decrease significantly in order to lead to cost reductions and to utilize the different disaster recovery areas (services, audits) more cost-efficiently. In this context, reducing maintenance and physical resource costs must also be considered. Disaster recovery solutions are used as a service, where the systems are predominantly located on the provider side. On the customer side, only marginal, consolidated hardware and software resources are needed.
  2. Reducing complexity
    When it comes down to implementation, operations and maintenance, the measures for a disaster recovery strategy are multi-faceted. In addition, new cloud and architectural concepts increase the complexity of ensuring the DNA as well as the 24/7 availability of distributed data and applications. New disaster recovery solutions should decrease the complexity on the customer side by moving the essential tasks, encapsulated within on-premise appliances, to the provider’s side.
  3. Disaster recovery scenarios across locations
    New operating models for production environments require a change of thinking among SMEs when creating a disaster recovery strategy. With the use of hybrid and multi-cloud infrastructures, cloud services and colocation, the DNA for the case of a disaster must be updated to ensure fail-safe operation of mission-critical applications and data. Only then can the consistency of all production systems be maintained across the entire lifecycle, and current as well as future business processes be optimally protected.

Better to drive with a spare tire on board

A disaster recovery strategy is comparable to the spare tire of a car. As a rule, you don’t need it. But what if you get stuck on the road in a thunderstorm…

With the growing importance of the digital economy and the increasing pace of digitalization, no company can afford to do without disaster recovery. The delivery of services and applications in real time, increasing requirements on reliability and the need to restore services as quickly as possible are the essential drivers behind this development. For this reason, a disaster recovery strategy must necessarily be part of the business strategy and get higher priority at executive decision-making levels.

Modern disaster recovery solutions put companies that hide behind excuses to shame and drive automated emergency management. With clearly calculable effort and less complexity, SMEs are now able to obtain the same disaster recovery effectiveness that had formerly been reserved for big enterprises, and at acceptable cost.

In the future, modern disaster recovery-as-a-service solutions will support a growing number of businesses in implementing a disaster recovery strategy with little effort and less cost. For this purpose, the market will offer different approaches and solutions that address varying needs and requirements.

Top 15 Open Source Cloud Computing Technologies 2014


Open source technologies have a long history. Linux, MySQL and the Apache Web Server are among the most popular and successful technologies brought forth by the community. Over the years, open source experienced a big hype and, driven by developers, moved into corporate IT. Today, IT environments are no longer conceivable without open source technologies. Driven by cloud computing, open source is presently gaining strong momentum.

Several projects launched in the recent past have significantly influenced the cloud computing market, especially when it comes to the development, setup and operations of cloud infrastructure, platforms and applications. What are the hottest and most important open source technologies in the cloud computing market today? Crisp Research has examined and classified the “Top 15 Open Source Cloud Computing Technologies 2014″ in order of significance.

OpenStack to win

Openness and flexibility are among the top five reasons for CIOs when selecting open source cloud computing technologies. At the same time, standardization is becoming increasingly important and serves as one of the biggest drivers for the deployment of open source cloud technologies. It is not without reason that OpenStack qualifies as the upcoming de-facto standard for cloud infrastructure software. Crisp Research advises building modern and sustainable cloud environments based on the principles of openness, reliability and efficiency. Especially in the areas of openness and efficiency, open source makes a significant contribution. With this in mind, CIOs set the stage for the implementation of multi-cloud and hybrid cloud/infrastructure scenarios and assist the IT department in the introduction and enforcement of a holistic DevOps strategy. DevOps, in particular, plays a crucial role in the adoption of Platform-as-a-Service and the development of applications for the cloud and leads to significant speed advantages, which also affect the competitive strength of the business.

The criteria for assessing the top 15 open source cloud computing technologies include:

  • Innovation and release velocity
  • Development of the community including support of large suppliers
  • Adoption rate of innovative developers and users

In consulting projects, Crisp Research particularly identifies leading users who apply modern open source technologies to run their own IT environments efficiently and in a future-oriented way across different scenarios.

The “Top 5 Open Source Cloud Computing Technologies 2014″:

  1. OpenStack
    In 2014, OpenStack is already the most important open source technology for enterprises and developers. Over 190,000 individuals in over 144 countries worldwide already support the infrastructure software. In addition, its popularity among IT manufacturers and vendors increases steadily. OpenStack serves a continuously increasing number of IT environments as a foundation for public, private and managed infrastructure. Organizations in particular have utilized OpenStack to build their own private clouds. IT providers like Deutsche Telekom (Business Marketplace) use OpenStack to build their cloud platforms. Today, only a few developers have direct contact with OpenStack. However, the solution is highly important for them, since platforms like Cloud Foundry or access to container technologies like Docker are often delivered via OpenStack. In other cases, they access the OpenStack APIs to develop their applications directly on top of the infrastructure (a minimal sketch of this follows after this list).
  2. Cloud Foundry
    In the growing platform-as-a-service (PaaS) market, Cloud Foundry is moving into a leading position. The project was initiated by Pivotal, a spin-off of EMC/VMware. Cloud Foundry is mostly used by organizations to deploy a private PaaS environment for internal developers. Managed service providers use Cloud Foundry to offer PaaS in a hosted environment. The PaaS project works perfectly together with OpenStack to build highly available and scalable PaaS platforms.
  3. KVM
    KVM (Kernel-based Virtual Machine) is the preferred hypervisor of infrastructure solutions like OpenStack or openQRM and enjoys a high priority within the open source community. KVM stands for a cost-efficient and especially powerful alternative to commercial offerings like VMware ESX or Microsoft Hyper-V. KVM has a market share of about 12 percent, largely because Red Hat uses this hypervisor as the foundation for its virtualization solutions. Over time, the standard hypervisor KVM will play closely together with OpenStack, as CIOs are presently searching for cost-effective ways to virtualize their infrastructure.
  4. Docker
    This year’s shooting star is Docker. The container technology, which was created as a byproduct during the development of the platform-as-a-service dotCloud, currently enjoys strong momentum and gets support from large players like Google, Amazon Web Services and Microsoft. For a good reason: Docker enables the loosely coupled movement of applications bundled in containers across several Linux servers, thus improving application portability. At first glance, Docker looks like a pure developer tool. From the point of view of an IT decision-maker, however, it is definitely a strategic tool for optimizing modern application deployments. Docker helps to ensure the portability of an application, to increase availability and to decrease the overall risk.
  5. Apache Mesos
    Mesos has risen to become a top-level project of the Apache Software Foundation. It was conceived at the University of California at Berkeley and helps to run applications in isolation from one another. At the same time, the applications are dynamically distributed across several nodes within a cluster. Mesos can be used with OpenStack and Docker. Popular users are Twitter and Airbnb. One of the driving forces behind Mesos is the German developer Florian Leibert, who was also jointly responsible for introducing the cluster technology at Twitter.
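For developers who do access the OpenStack APIs directly (as noted for OpenStack above), a minimal, illustrative sketch of what this looks like is shown below. It uses the official Python openstacksdk; the cloud profile, image, flavor and network names are placeholders and not taken from the article.

```python
# Minimal, illustrative sketch: booting one instance against the OpenStack
# compute and network APIs via the Python openstacksdk. All names below
# (cloud profile, image, flavor, network) are placeholders.
import openstack

# Credentials and endpoints are resolved from clouds.yaml or OS_* environment variables.
conn = openstack.connect(cloud="example-cloud")

image = conn.compute.find_image("ubuntu-14.04")    # placeholder image name
flavor = conn.compute.find_flavor("m1.small")      # placeholder flavor name
network = conn.network.find_network("private")     # placeholder network name

server = conn.compute.create_server(
    name="demo-instance",
    image_id=image.id,
    flavor_id=flavor.id,
    networks=[{"uuid": network.id}],
)
server = conn.compute.wait_for_server(server)
print(server.name, server.status)
```

Platforms such as Cloud Foundry or container hosts for Docker are typically provisioned on top of exactly this kind of API.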

Open Source is eating the license-based world

Generally, proprietary players such as IBM, HP and VMware are cozying up to open source technologies. HP’s first cloud offering, “HP Cloud”, was already based on OpenStack. With the HP Helion Cloud, the whole cloud portfolio (public, private) was harmonized on OpenStack. In addition, HP has become the biggest code contributor to the upcoming OpenStack “Juno” release, which will be published in October. IBM contributes to OpenStack and uses Cloud Foundry as the foundation for its PaaS “Bluemix”. At VMworld in San Francisco, VMware announced a tighter cooperation with OpenStack as well as with Docker. In this context, VMware will present its own OpenStack distribution (VMware Integrated OpenStack, VIO) in Q1/2015, which enables the setup of an OpenStack implementation based on VMware vSphere. The Docker partnership is intended to ensure that the Docker engine runs on VMware Fusion and on servers with VMware vSphere and vCloud Air.

Open source solutions like OpenStack are attractive not only for technical reasons. From a financial perspective, OpenStack also makes an essential contribution, as the open source framework significantly reduces the cost of building and operating a cloud infrastructure. License costs for current cloud management and virtualization solutions account for around 30 percent of the overall cloud TCO. This means that numerous start-ups and large, well-respected software vendors like Microsoft and VMware make a good chunk of their business by selling licenses for their solutions. With OpenStack, CIOs gain the opportunity to provision and manage their virtual machines and cloud infrastructure using open source technologies. To support this, free community editions as well as professional, enterprise-ready distributions including support are available; both are options to significantly reduce the license costs for operating cloud infrastructures. OpenStack gives CIOs a valuable tool to exert pressure on Microsoft and VMware.

Data Center: Hello Amazon Web Services. Welcome to Germany!


The analysts had already written about it. Now the advance, but unconfirmed, announcements have finally become reality. Amazon Web Services (AWS) has opened a data center in Germany (region: eu-central-1). It is aimed especially at the German market and is AWS’ 11th region in the world.

Data centers in Germany are booming

These days it is fun to be a data center operator in Germany. Not only are the “logistics centers of the future” coming more and more into focus because of the cloud and the digital transformation; the big players in the IT industry are also increasingly reaching out toward German companies. After Salesforce announced its landing for March 2015 (in partnership with T-Systems), Oracle followed next and recently VMware.

Contrary to the widespread opinion that data is more secure on German soil, these strategic decisions have nothing to do with data security for customers. A German data center on its own offers no higher data security; it merely provides the benefit of fulfilling the legal requirements of the German data privacy level.

However, from a technical perspective, locality is of great importance. Due to the continuous relocation of business-critical data, applications and processes to external cloud infrastructures, the IT operating concepts (public, private, hybrid) as well as network architectures and connectivity strategies are changing significantly for CIOs. On the one hand, modern technology is required to provide applications in a performance-oriented, stable and secure manner; on the other hand, the location is decisive for optimal “cloud connectivity”. Therefore it is important to understand that the quality of a cloud service depends significantly on its connectivity and the performance of the backend. A cloud service is only as good as the connectivity that delivers it. Cloud connectivity – low latency as well as high throughput and availability – is becoming a critical competitive advantage for cloud providers.

Now Amazon Web Services as well

Despite concrete technical hints, AWS executives had cloaked themselves in a mantle of secrecy. Despite all denials, it is now official: AWS has opened a new region, “eu-central-1”, in Frankfurt. The region is based on two availability zones (two separate data center locations) and offers the full functionality of the Amazon cloud. The new cloud region is already operational and can be used by customers. With the location in Frankfurt, Amazon opens its second region in Europe (besides Ireland). This enables customers to build a multi-region setup in Europe to ensure higher availability of their virtual infrastructure, from which the uptime of their applications also benefits.

That Frankfurt is the place to be for cloud providers comes as no surprise. On the infrastructure side, Frankfurt am Main is the backbone of digital business in Germany. As far as data center density and connectivity to central internet hubs are concerned, Frankfurt is the leader in Germany and Europe. The continuous relocation of data and applications to external cloud provider infrastructures has made Frankfurt the stronghold for cloud computing in Europe.


In order to help its customers with data privacy requirements as well as on the technical side (cloud connectivity), Amazon has made the right decision regarding the German data center landscape. The same already happened in May this year, when the partnership with NetApp for setting up hybrid cloud storage scenarios was announced.

Serious German workloads on the Amazon Cloud

A good indication of the attractiveness of a provider’s infrastructure is its reference customers. From the beginning, AWS focused on startups. In the meantime, however, the company is trying everything to also attract enterprise customers. Talanx and Kärcher are already two well-known customers from the German corporate landscape.

The insurance company Talanx has shifted the reporting and calculation of its risk scenarios into the Amazon cloud (developed from scratch). Talanx is thus able to move its risk management out of its own data center while remaining Solvency II compliant. According to Talanx, it is achieving a time advantage as well as annual savings of eight million euro. The corporation and its CIO Achim Heidebrecht are already evaluating further applications to shift into the Amazon cloud.

Kärcher is the world’s leading manufacturer of cleaning systems and is using the Internet of Things (machine-to-machine communication) to improve its business model. To optimize the usage of its worldwide cleaning fleet, Kärcher relies on the global footprint of Amazon’s cloud infrastructure. Kärcher’s machines regularly send information into the Amazon cloud to be processed. In addition, Kärcher uses the Amazon cloud to provide information to its worldwide partners and customers.
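As a purely hypothetical illustration of this machine-to-machine pattern (not Kärcher’s actual implementation), the sketch below shows how a device could push one telemetry record into an AWS stream for later processing. It assumes boto3 and an existing Amazon Kinesis stream; the stream name, region and fields are invented.

```python
# Hypothetical sketch: a machine pushing telemetry into the Amazon cloud.
# Assumes boto3 and an existing Kinesis stream; all names and fields are invented.
import json
import time

import boto3

kinesis = boto3.client("kinesis", region_name="eu-central-1")

def send_telemetry(machine_id, payload):
    """Publish one telemetry record, partitioned by machine ID."""
    record = {"machine_id": machine_id, "timestamp": time.time(), **payload}
    kinesis.put_record(
        StreamName="machine-telemetry",            # placeholder stream name
        Data=json.dumps(record).encode("utf-8"),
        PartitionKey=machine_id,
    )

send_telemetry("cleaner-0042", {"runtime_hours": 1.5, "water_liters": 12.3})
```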

Strategic vehicle: AWS Marketplace

Software AG is the first well-known traditional ISV (Independent Software Vendor) on the Amazon cloud. The popular BPM tool Aris is now available as “Aris-as-a-Service” (SaaS) and is delivered in a scalable manner using the Amazon cloud infrastructure.

Software AG is only one example; several German ISVs could follow. In particular, the global scalability of Amazon’s cloud infrastructure makes it an attractive partner for delivering SaaS applications to target customers beyond the German market. In this context, the AWS Marketplace plays a key role. AWS operates a marketplace infrastructure that ISVs can use to provide their solutions to a broad, worldwide audience. The benefits for the ISV:

  • The ISV develops on the Amazon cloud and doesn’t need its own (global) infrastructure.
  • The ISV delivers its solutions “as a service” and distributes them over the AWS Marketplace.
  • The ISV benefits from the popularity and reach of the marketplace.

This scenario means one thing for AWS: the cloud giant wins in any case. As long as the infrastructure resources are being used, the money keeps rolling in.

Challenges of the German market

Despite their innovation leadership, public cloud providers like AWS are having a hard time with German companies, especially with the powerful Mittelstand. For the Mittelstand, self-service and the complexity of using the cloud infrastructure are among the main reasons to avoid the public cloud. In addition, even if it has nothing to do with the cloud itself, the NSA scandal has left psychological scars on German companies. Data privacy concerns associated with US providers are the icing on the cake.

Nevertheless, with the data center in Frankfurt, AWS has done its duty. However, to be successful in the German market there is still work left to do:

  • Building a potent partner network to appeal to the mass of German enterprise customers.
  • Reducing complexity by simplifying the use of the scale-out concept.
  • Strengthening the AWS Marketplace to ease the use of scalable standard workloads and applications.
  • Increasing the attractiveness for German ISVs.

The impact of OpenStack on cloud sourcing


In 2014, German companies will invest around 6.1 billion euro in cloud technologies. Thus, cloud sourcing already accounts for seven percent of the total IT budget. For this reason, cloud ecosystems and cloud marketplaces will gain in significance in the future.

Crisp Research predicts that the share of cloud services traded via cloud marketplaces, platforms and ecosystems will reach around 22 percent by 2018. However, the basic requirement for this is to eliminate the current weak spots:

  • A lack of comparability,
  • Low transparency,
  • As well as poor integration.

These are elementary factors for successful cloud sourcing.

Openness vs. Comparability, Transparency and Integration

In bigger companies, the cloud sourcing process and the cloud buying center deal with a particular complexity. This is due to the fact that the cloud environment is based on several operating models, technologies and vendors. On average, smaller companies use five vendors (e.g. for SaaS). Big, globally distributed companies deal with over 20 different cloud providers. On the one hand, this shows that hybrid and multi-cloud sourcing is not a trend but reality. On the other hand, it shows that data and system silos remain an important topic even in cloud times. But how should IT buyers deal with this difficult situation? How can a dynamically growing portfolio be planned and developed in the long term, and how is future-proofing guaranteed? These are challenges that should not be underestimated. The reason is obvious: over the last years, neither cloud providers nor organizations or standardization bodies have been able to create binding and viable cloud standards.

Without these standards, clouds are not comparable among themselves. Thus, IT buyers face a lack of comparability, on the technical as well as the organizational level. In this context, contracts and SLAs are one issue; the technical context is even more difficult and riskier. Each cloud infrastructure provider has its own magic formula for how the performance of a single virtual machine is composed. This lack of transparency leads to bigger overhead for IT buyers and increases the costs of planning and tendering processes. The IaaS providers are fighting out their competition on the backs of their customers. Brave new cloud world.

Another problem is the poor integration of common cloud marketplaces and cloud ecosystems. The variety of services on these platforms is growing. However, the direct interaction between the different services within a platform has been neglected. The complexity increases when services are integrated across infrastructures, platforms or marketplaces. Today, deep process integration is not possible without considerable effort. This is mostly due to the fact that each closed ecosystem is cooking its own meal.

Standardization: OpenStack to set the agenda

Proprietary infrastructure foundations can be a USP for the provider. At the same time, however, they lead to poor interoperability. This causes enormous problems when services are used across providers and increases the complexity for the user. As a result, comparing the offerings is not possible.

Open source technologies are putting things right in this situation. Thanks to the open approach, several providers take part in such projects in order to push the solution and, of course, to represent their own interests. It turns out, therefore, that an overseeing authority is necessary to increase distribution and adoption. The benefit: if more than one provider uses the technology, this leads to better interoperability across providers and gives the user better comparability. In addition, the complexity for the user decreases, and with it the effort of working across providers – e.g. when setting up hybrid and multi-cloud scenarios.
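What such interoperability buys the user can be sketched with a provider-independent library such as Apache Libcloud (an illustrative choice; the article does not name a specific tool): the same inventory code runs against an OpenStack cloud and a second provider once the drivers are configured. All credentials and endpoints below are placeholders.

```python
# Illustrative sketch: one piece of code inventorying instances across two
# providers via Apache Libcloud. Credentials and endpoints are placeholders.
from libcloud.compute.providers import get_driver
from libcloud.compute.types import Provider

openstack_cloud = get_driver(Provider.OPENSTACK)(
    "demo-user", "demo-password",
    ex_force_auth_url="https://keystone.example:5000",
    ex_force_auth_version="3.x_password",
    ex_tenant_name="demo-project",
)
amazon_cloud = get_driver(Provider.EC2)(
    "ACCESS_KEY_ID", "SECRET_ACCESS_KEY", region="eu-central-1"
)

for label, cloud in [("OpenStack", openstack_cloud), ("Amazon EC2", amazon_cloud)]:
    nodes = cloud.list_nodes()
    print(f"{label}: {len(nodes)} running instances")
```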

A big community of interests, in which well-known members push the technology and use it for their own purposes, leads to a de-facto standard over time. This is a technical standard which “[…] may be developed privately or unilaterally, for example by a corporation, regulatory body, military, etc.”

The open source project OpenStack shows impressively how this works. Since its start in 2010, the framework for building public and private cloud infrastructures has been getting a lot of attention and has constant, strong momentum. By now, OpenStack is the foundation of several public cloud infrastructures and product portfolios, among others those of Rackspace, HP, IBM, Cisco and Oracle. Many enterprises have also discovered OpenStack for their private cloud environments, e.g. Wells Fargo, PayPal, Bloomberg, Best Buy and Walt Disney.

Because of the open approach as well as the continuous development by the huge and potent community (a new version is released every six months), OpenStack is a reliable and trustworthy partner for IT infrastructure managers. Professional distributions are helping to increase the footprint on the user side and ensure that more and more IT decision makers at bigger companies will build their cloud infrastructure on OpenStack in the future.

This positive development has also arrived in Germany. The results of a current Crisp Research study (“OpenStack im Unternehmenseinsatz”, in German) show that almost 50 percent of cloud users know OpenStack, and 29 percent of cloud users already engage actively with it.

The OpenStack ecosystem keeps getting bigger and is thus pushing standardization in the cloud. For this reason, IT buyers are gaining more scope when purchasing cloud resources from several providers. But they should keep in mind that their IT architects will decouple applications ever further from the underlying infrastructure in the future in order to move applications and workloads on demand across providers. Container technologies like Docker – supported by OpenStack – are pushing this trend.

Think across marketplaces

Cloud marketplace providers should act in the interest of their customers and, instead of using proprietary technology, also rely on open source technologies or a de-facto standard like OpenStack. In this way they enable interoperability between cloud service providers as well as between several marketplaces and create the conditions for a comprehensive ecosystem in which users get better comparability as well as the capabilities to build and manage truly multi-cloud environments. This is the groundwork that empowers IT buyers to benefit from the strengths of individual providers and the best offerings on the market.

Open approaches like OpenStack foster the future ability of IT buyers to act across provider and data center borders. This makes OpenStack an important cloud sourcing driver – provided all involved parties commit to a common standard, in the interest of the users.

Analyst Strategy Paper: Open Cloud Alliance – Openness as an Imperative


Future enterprise customers will be accessing a mixture of proprietary on-premises IT, hosted cloud services of local providers and globally active cloud service providers. This is a major opportunity for the market and all participants involved. This is especially so for small hosters with existing infrastructures, as well as for system integrators with the appropriate know-how and existing customer relations.

In light of this, in this strategy paper Crisp Research investigates the challenges that both provider groups are facing in this situation, and deals with the most important aspects and their solutions.

The strategy paper can be downloaded at “Open Cloud Alliance – Openness as an Imperative“.

Build or Buy? – The CIO’s OpenStack Dilemma


OpenStack has become the most important open source project for cloud infrastructure solutions. Since 2010, hundreds of companies have been participating in order to develop an open, standardized and versatile technology framework which can be used to manage compute, storage and networking resources in public, private and hybrid cloud environments. Even though OpenStack is an open source solution, this does not imply that setup, operation and maintenance are easy to handle. OpenStack can behave like a true beast. A number of CIOs who are running self-developed OpenStack infrastructure are reporting significant increases in cost and complexity. They have made numerous fine adjustments to fit OpenStack to their individual needs, ending up with OpenStack implementations that are no longer compatible with the current releases. This leads to the question of whether a build or buy strategy is the right approach to deploying OpenStack in the company’s own IT environment.

OpenStack to gather pace

OpenStack has quickly become an essential factor in the cloud infrastructure business. Started in 2010 as a small open source project, the solution is meanwhile used by hundreds of enterprises and organizations, including several big companies (PayPal, Wells Fargo, Deutsche Telekom) as well as innovative startups and developers. In the early days, its initiators used OpenStack to build partly proprietary cloud environments. More than 850 companies now support the project, among them IBM, Oracle, Red Hat, Cisco, Dell, Canonical, HP and Ericsson.


Note: The numbers are based on the official statistics of the OpenStack Foundation.

Alongside the continuous improvement of the technology, the adoption rate increases accordingly. This can be seen in the worldwide growth of OpenStack projects (an increase of 128 percent in 2014, from 105 in Q1 to 239 in Q4). Here, on-premise private clouds are by far the preferred deployment model. In Q1 2014, the OpenStack Foundation counted 55 private cloud installations worldwide. In Q4, the number grew to 130. For the next 12 months, Crisp Research expects 25 percent growth for OpenStack-based enterprise private clouds.

OpenStack: Build or Buy?

OpenStack offers the capability to operate environments in synergy with a variety of other open source technologies and, at the same time, to be cost-efficient (no or only minor license costs). However, the complexity level increases dramatically in this case. Even if CIOs tend to use OpenStack only as a cloud management layer, there is still a high degree of complexity to manage. Most OpenStack beginners are not aware that OpenStack has more than 500 knobs that have to be set to configure OpenStack clouds the right way.

The core issue for most of the companies who want to benefit from OpenStack is: Build or Buy!

When preparing and evaluating the build or buy decision, companies should definitely consider their in-house experience and technical knowledge with respect to OpenStack. IT decision makers should question their internal skills and clearly define their requirements in order to compare them with the offerings of OpenStack distributors. Analogous to the Linux business, OpenStack distributors offer ready-bundled OpenStack versions including support – mostly with integration services. This reduces the implementation risk and accelerates the execution of the project.

The CIO is in demand

For quite some time, CIOs and cloud architects have been trying to answer the question of how they should build their cloud environments in order to match their companies’ requirements. After recent years were spent on “trial and error” approaches and most cloud infrastructures had an exploratory character, it is about time to implement large-volume projects within production environments.

This raises the question of which cloud design is the right one for IT architects to plan their cloud environments around. Crisp Research advises building modern and sustainable cloud environments based on the principles of openness, reliability and efficiency. Especially in the areas of openness and efficiency, OpenStack can make a significant contribution.

The complete German analyst report “Der CIO im OpenStack Dilemma: BUY oder DIY?” can be downloaded at http://www.crisp-research.com/report-der-cio-im-openstack-dilemma-buy-oder-diy.

OpenStack Deployments Q4/2014: On-Premise Private Clouds continue to lead the pack


Around 4,600 attendees at the OpenStack Summit in Paris made a clear statement. OpenStack is the hottest open source project for IT infrastructure and cloud environments in 2014. The momentum, driven by an ever-growing community, is reflected in the technical details. Juno, the current OpenStack release, includes 342 new features, 97 new drivers and plugins together with 3,219 fixed bugs. 1,419 contributors supported Juno with code and innovations, an increase of 16 percent over the previous Icehouse release. According to the OpenStack Foundation, over the last six months the share of production environments has increased from 33 percent to 46 percent. Most of the users come from the US (47 percent), followed by Russia (27 percent) and Europe (21 percent).

Thus, even though the total number of new projects in Q4 went up by only 13 percent as compared to Q3, the appeal is still unabated. On-premise private clouds are by far still the preferred deployment model. In Q3 2014, the OpenStack Foundation registered 114 private cloud installations worldwide. In Q4, the number grew to 130. By comparison, the number of worldwide OpenStack public clouds grew by 13 percent. At 3 percent, hosted private cloud projects show the smallest growth.


Note: The numbers are based on the official statistics of the OpenStack Foundation.

Over the full year, the worldwide number of OpenStack projects increased by 128 percent in 2014, from 105 in Q1 to 239 in Q4. Regarding the total number of deployments, on-premise private clouds are by far the preferred model. In Q1 2014, the OpenStack Foundation counted 55 private cloud installations worldwide. In Q4, the number grew to 130. However, with an increase of 140 percent, hybrid clouds show the biggest growth.

Private clouds clearly have the biggest appeal, because through them cloud architects have found an answer to how to build cloud environments tailored to specific needs. OpenStack supports the necessary features to build modern and sustainable cloud environments based on the principles of openness, reliability and efficiency. The project makes a significant contribution, especially in the areas of openness and efficiency. After years of “trial and error” approaches and cloud infrastructure with an exploratory character, OpenStack is the answer when it comes to implementing large-volume projects within production environments.

OpenStack on-premise private cloud lighthouse projects are run by Wells Fargo, Time Warner, Overstock, Expedia, Tapjoy and CERN (also hybrid cloud). CERN’s OpenStack project in particular is an impressive example and shows the capabilities of OpenStack to be the foundation of an infrastructure for massive scale. Some facts about CERN’s OpenStack project were presented by CERN Infrastructure Manager Tim Bell at the OpenStack Summit in Paris:

  • 40 million pictures per second are taken
  • 1 PB of data per second is generated
  • 100 PB archive of data (plus: 27 PB per year)
  • an estimated 400 PB per year by 2023
  • 11,000 servers
  • 75,000 disk drives
  • 45,000 tapes

CERN operates a total of four OpenStack-based clouds. The largest cloud (Icehouse release) runs around 75,000 cores on more than 3,000 servers. The three other clouds have a total of 45,000 cores. CERN expects to pass 150,000 cores by Q1 2015.

Further interesting OpenStack projects can be found at http://superuser.openstack.org.


Platform-as-a-Service: Strategies, technologies and providers – A compendium for IT decision-makers


“Software is eating the world” – With this sentence in an article for the Wall Street Journal in 2011, the Netscape founder and famous venture capitalist Marc Andreessen described a trend which is today more than apparent: software will provide the foundation for transforming virtually all industries, business models and customer relationships. Software is more than just a component for controlling hardware: it has become an integral part of the value added in a large number of services and products. From the smartphone app to the car computer. From music streaming to intelligent building control. From tracking individual training data to automatic monitoring of power grids.

60 years after the start of the computer revolution, 40 years after the invention of the microprocessor and 20 years after the launch of the modern internet, the requirements for developing, operating and above all distributing software have changed fundamentally.

But it is not just the visionaries from Silicon Valley who have recognised this structural change. European industrial concerns such as Bosch are meanwhile also investing a large percentage of their research and development budgets in a new generation of software. Bosch CEO Volkmar Denner has announced that by 2020 all equipment supplied by Bosch will be internet-enabled. Because in the Internet of Things, the benefit of products is only partially determined by their design and hardware specification. The networked services provide much of the benefit – or value added – of the product. And these services are software-based.

The implications of this structural shift have been clearly apparent in the IT industry for a few years now. Large IT concerns such as IBM and HP are trying to reduce their dependency on their hardware business and are investing heavily in the software sector.

In the meantime, the cloud pioneer Salesforce, with sales of four billion USD, has developed into one of the heavyweights in the software business. Start-ups like WhatsApp offer simple but extremely user-friendly mobile apps which have gained hundreds of millions of users worldwide within a few years.

State-of-the-art software can nowadays cause so-called disruptive changes. In companies and markets. In politics and society. In culture and in private life. But how are all these new applications created? Who writes all the millions of lines of code? Which are the new platforms upon which the thousands upon thousands of applications are developed and operated?

This compendium examines these questions. It focuses particularly on the role that “Platform as a Service” offerings play for developers and companies today. Because although, after almost a decade of cloud computing, the terms IaaS and SaaS are widely known and many companies use these cloud services, only a few companies have so far gathered experience with “Platform as a Service”.

The aim is to provide IT decision-makers and developers with an overview of the various types and potential applications of Platform as a Service. Because there is a wide range of these, extending from designing business processes (aPaaS) right through to the complex integration of different cloud services (iPaaS).

With this compendium, the authors and the initiator, PIRONET NDH, wish to make a small contribution to a better understanding of state-of-the-art cloud-based software development processes and platforms. This compendium is designed to support all entrepreneurs, managers and IT experts who will have to decide on the development and use of new applications and software-based business processes in the coming years. The authors see Platform as a Service becoming one of the cornerstones for implementing the digital transformation in companies, because the majority of new digital applications will be developed and operated on PaaS platforms.

The compendium can be downloaded free of charge at “Platform-as-a-Service: Strategies, technologies and providers – A compendium for IT decision-makers“.

Cloud Computing Adoption in Germany in 2014


A current study by Crisp Research surveying 716 IT decision makers shows a representative picture of cloud adoption in the German-speaking (DACH) market. In 2014, the cloud has been embraced by IT departments. More than 74 percent of the companies are already planning and implementing cloud services and technologies in production environments. Only about one in four (26 percent) of those surveyed said that the cloud is not an option for their company at present or in the future.

The cloud naysayers are primarily small and medium-sized enterprises (more than 50 percent of all cloud naysayers come from companies with up to 500 employees). The number of companies in this segment that are actively planning or implementing cloud technologies is also significantly low. One reason for this is the low affinity for IT at enterprises of this size. Many small and medium-sized enterprises still do not understand the different IaaS, PaaS and SaaS options and are not familiar with the cloud services on offer. In addition, compared to bigger companies, the smaller ones place higher weight on the risks associated with the use of cloud services. IT departments of larger companies are better equipped with protection measures and knowledge of IT security and therefore feel less exposed to the risk of data misuse or loss.

Cloud naysayers are a dying species in large businesses with at least 5,000 employees (less than 20 percent). IT managers there are already implementing cloud operation processes and are looking for the most intelligent architectures as well as the most secure systems. For them the question is not “if” but rather “how” they can use cloud technologies.

According to the survey, companies with 5,000 to 10,000 employees are the frontrunners. At 39.5 percent, the share of companies that describe the cloud as a solid element of their IT strategy and IT operations is the highest in the study. For companies of this size, the share of cloud naysayers is also the lowest, at only 16.3 percent.

In very large companies with more than 10,000 employees cloud usage flattens out a little. There are various reasons for this. On the one hand, the increasing IT and organizational complexity leads to longer planning and implementation processes. Discussions of security and governance processes also take correspondingly more time and naturally lead to demands for higher standards, which sometimes cannot be fulfilled by the cloud provider. On the other hand, the tendency to develop applications in-house to manage individual software solutions and processes is higher in this enterprise category. Stronger financial resources and the availability of their own comprehensive data center capacities allow additional demands to be handled internally, even if the implementation is not as flexible and innovative as the internal users and departments require.

Plain-speaking: Data Privacy vs. Data Security – Espionage in the Cloud Age


When it comes to data privacy, data security and espionage, there are many myths and misconceptions. Especially in the context of the cloud, half-truths circulate over and over again. Vendors shamelessly take advantage of customer insecurities by using incorrect information in their PR and marketing activities.

Causing confusion and hoaxes

Headlines like „Oracle invests in Germany for data security reasons“ are just one of many examples of information misinterpreted by the media. However, the ones who should know better – the vendors – are doing nothing to provide more clarity. On the contrary, the fears and concerns of users are exploited without mercy to generate business. For example, Oracle’s country manager Jürgen Kunz justifies the two new German data centers by stating that “In Germany data security is a particularly sensitive topic.” The NSA card is easy to play these days by simply adding “… that Oracle, as a US company, stays connected to the German market.”

However, the location has nothing to do with data security and the NSA scandal. Whether an intelligence agency gains access to the data in a data center in Germany, the US, Switzerland or Australia has very little to do with the country itself. If the cloud provider sticks to its own global policies for data center security on the physical as well as the virtual level, a data center should provide the same level of security regardless of its location. Storing data in Germany is no guarantee for a higher level of security. A data center in the US, UK or Spain is just as secure as a data center in Germany.

The confusion arises because two different terms are frequently mixed up when it comes to security: data security and data privacy.

What is data security?

Data security means the implementation of all technical and organizational measures required to ensure the confidentiality, availability and integrity of all IT systems.
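
As a small illustration of what such a technical measure can look like in practice, the following Python sketch verifies data integrity with a SHA-256 checksum. It is only one tiny building block of data security; confidentiality and availability require further measures such as encryption, access control and backups.

    # Minimal sketch: verifying data integrity with a SHA-256 checksum.
    import hashlib

    def sha256_of(data: bytes) -> str:
        """Return the hex SHA-256 digest of the given bytes."""
        return hashlib.sha256(data).hexdigest()

    original = b"quarterly financial report"          # hypothetical payload
    stored_checksum = sha256_of(original)

    # Later, e.g. after transferring the data to or from a storage service:
    received = b"quarterly financial report"
    if sha256_of(received) == stored_checksum:
        print("integrity check passed")
    else:
        print("data was modified or corrupted")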

Public cloud providers offer far better security than a small business is able to achieve. This is due to the investments that cloud providers make to build and maintain their cloud infrastructures, amounting to billions of US dollars every year. In addition, they employ staff with the right mix of skills and have created appropriate organizational structures. Only a few companies outside the IT industry are able to achieve the same level of IT security.

What is data privacy?

Data privacy is about the protection of personal rights and privacy during data processing.

This topic causes the biggest headaches for most companies, because the legislator does not make it easy for them. A customer has to audit the cloud provider for compliance with the local federal data protection act. Here it is advisable to rely on the report of a certified public auditor, since it would be time- and resource-consuming for a public cloud provider to be audited by each of its customers.

Data privacy is a very important topic; after all, it concerns sensitive data. However, it is essentially a legal matter whose requirements must be ensured by data security measures.

The NSA is just a pretext. Espionage is ubiquitous.

Espionage is ubiquitous, yes, even in countries like Germany. However, one should not forget that every company could have a potential Edward Snowden in its own ranks. An employee may feel comfortable today, but what happens when he receives a more attractive offer or the culture in the team or the company changes? Insider threats present a much greater danger than external attackers or intelligence agencies. The former hacker Kevin Mitnick describes in his book “The Art of Deception” how he got all the information needed to prepare his attacks simply by browsing the trash of his victims and using social engineering techniques. In his case it was more about manipulating people and extensive research than about compromising IT systems.

A German data center as a protection against the espionage of friendly countries is and will remain a myth. Where there’s a will, there’s a way. When an attacker wants the data, it is only a question of the criminal energy he is willing to expend and the funds he is able to invest. If the technical hurdles are too high, there is still the human factor as an option, and a human is generally “purchasable”.

The cloud is the scapegoat!

The cloud is not the issue. Using espionage as an excuse for not using cloud services is too easy. Bottom line: in the times before the cloud, the age of outsourcing, it was also possible to spy, and intelligence agencies did so. Despite the contracts with their customers, providers were also able to secretly hand over data to intelligence agencies.

If espionage had been in focus during the age of outsourcing as it is today, outsourcing would have been demonized by now. Today’s discussions are largely a product of a political situation in which a lack of trust characterizes formerly established economic, military and intelligence partnerships.

Due to the amount of data that cloud providers are hoarding and merging, they have become more attractive targets today. Nevertheless, for an outsider, getting access to a data center takes a lot of effort. Andrew Blum describes in his book “Tubes: Behind the Scenes at the Internet” that, because of its high connectivity to other countries (e.g. data from Tokyo to Stockholm or from London to Paris), one of the first Internet hubs, “MAE-East” (1992), quickly became a target of US espionage. No wonder, since MAE-East was the de facto way into the Internet. The bottom line is that intelligence agencies do not need to set foot in a single provider’s data center – they simply need to tap a connectivity hub to eavesdrop on the data lines.

The so-called “Schengen routing” is discussed in this context. The idea is to keep data traffic within Europe by transferring data only between hosts in Europe. Theoretically, this sounds like an interesting idea; in practice it is totally unfeasible. When using cloud services from US providers, the data are routed through the US. If an email from a German provider’s account is sent to an account managed by a US provider, the data need to leave Europe. In addition, for many years we have been living in a fully interconnected world where data are exchanged globally, and there is no way back.

A more serious issue is the market power and the clear innovation leadership of the US compared to Europe and Germany. The availability and competitiveness of German and other European cloud services is still limited. As a result, many companies have to use cloud services from US providers. Searching for a solution only at the network layer is useless as long as no competitive cloud services, infrastructure and platforms from European providers are available. Until then, only data encryption helps to prevent intelligence agencies and other criminals from accessing the data.
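
A minimal sketch of such client-side encryption, assuming the Python “cryptography” package and a hypothetical upload routine, illustrates the principle: if the key never leaves the customer, the provider’s location becomes largely irrelevant.

    # Minimal sketch: encrypting data on the customer's side before it ever
    # reaches a cloud provider, using the symmetric Fernet scheme from the
    # "cryptography" package. File name and upload call are placeholders.
    from cryptography.fernet import Fernet

    key = Fernet.generate_key()      # must be generated and stored locally only
    fernet = Fernet(key)

    with open("quarterly_report.pdf", "rb") as f:     # hypothetical local file
        ciphertext = fernet.encrypt(f.read())

    # upload_to_cloud(ciphertext)    # placeholder for a provider-specific upload

    # Only the key holder can later recover the plaintext:
    plaintext = fernet.decrypt(ciphertext)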

It is imperative that European and German providers develop and market innovative and attractive cloud services, because a German or European data center on its own has little to do with higher data security. It only offers the benefit of the German/European data privacy standard to fulfill the regulatory framework.

– – –
Picture source: lichtkunst.73 / pixelio.de

Q & A Panel with Media and Analyst Covering OpenStack

Signature Project: SAP Monsoon


So far, SAP has not made a strong impression in the cloud. The Business-by-Design disaster and the regular changes in the cloud unit’s leadership are only two examples that reveal the desolate situation of the German flagship corporation from Walldorf in this market segment. At the same time, the powerful user group DSAG is up in arms. The complexity of SAP’s cloud ERP as well as the lack of HANA business cases are some of the issues. The lack of transparency of prices and licenses as well as a sinking appreciation for the maintenance agreements, since the support does not justify the maintenance fees, are leading to uncertainty and irritation on the customer side. To add to this, a historically grown and complex IT infrastructure causes a significant efficiency bottleneck in operations. However, a promising internal cloud project might set the course for the future if it is thoroughly implemented: Monsoon.

Over the years, the internal SAP cloud landscape has evolved into a massive but very heterogeneous infrastructure comprising an army of physical and virtual machines, petabytes of RAM and petabytes of cloud storage. New functional requirements, changes in technology as well as a number of M&As have resulted in various technology silos, greatly complicating any migration efforts. The highly diverse technology approaches and a mix of VMware vSphere and XEN/KVM distributed over several datacenters worldwide lead to ever higher complexity in SAP’s infrastructure operations and maintenance.

Application lifecycle management is the icing on the cake: installations and upgrades are manual, semi-automated or automated, depending on the age of the respective cloud. This unstructured environment is by no means an SAP-specific problem; it rather represents the reality in mid-sized to large cloud infrastructures whose growth has gone uncontrolled over the last years.

„Monsoon“ in focus – a standardized and automated cloud infrastructure stack

Even if SAP is in good company with this challenge, the situation leads to vast disadvantages at the infrastructure, application, development and maintenance layers:

  • The time developers wait for new infrastructure resources is too long, leading to delays in the development and support process.
  • Only entire releases can be rolled out, a stumbling block which results in higher expenditures in the upgrade/update process.
  • IT operations keep their hands on the IT resources and wait for resource allocation approvals by the responsible instances. This affects work performance and leads to poor efficiency.
  • A variety of individual solutions makes a largely standardized infrastructure landscape impossible and leads to poor scalability.
  • Technology silos distribute the necessary knowledge across too many heads and exacerbate the difficulties in collaboration during the troubleshooting and optimization of the infrastructure.

SAP addresses these challenges proactively with its project “Monsoon”. Under the command of Jens Fuchs, VP Cloud Platform Services Cloud Infrastructure and Delivery, the various heterogeneous cloud environments are to become a single homogeneous cloud infrastructure, which is to be extended to all SAP datacenters worldwide. A harmonized cloud architecture, widely supported uniform IaaS management as well as automated end-to-end application lifecycle management form the foundation of this “One Cloud”.

As a start, SAP wants to improve the situation of its in-house developers. The foundation of a more efficient development process is laid upon standardized infrastructure, which also streamlines future customer application deployments. For this purpose, “Monsoon” is implemented in DevOps mode: development and operations of “Monsoon” are split into two teams who work hand in hand towards a common goal. Developers get access to the required standardized on-demand IT resources (virtual machines, developer tools, services) through a self-service portal. Furthermore, this mode enables the introduction of so-called continuous delivery. This means that parts of “Monsoon” have already been implemented and are actively used in production while other parts are still in development. After passing through development and testing, components are transferred directly into the production environment without waiting for a separate release cycle. As a result, the pace of innovation increases.

Open Source and OpenStack are the imperatives

The open source automation solution Chef is the cornerstone of “Monsoon’s” self-service portal, enabling SAP’s developers to deploy and automatically configure the needed infrastructure resources themselves. This also applies to self-developed applications. In general, the “Monsoon” project makes intensive use of open source technologies. In addition to the hypervisors XEN and KVM, other solutions like the container virtualization technology Docker or the platform-as-a-service (PaaS) Cloud Foundry are being utilized.

The anchor of this software-defined infrastructure is OpenStack. The open source project, which can be used to build complex and massively scalable cloud computing infrastructures, supports IT architects in the orchestration and management of their cloud environments. Meanwhile, a powerful conglomerate of vendors stands behind the open source solution, trying to position OpenStack and their own services built on OpenStack prominently in the market. Another wave of influence comes from a range of developers and other interested parties who contribute to the project. At present, around 19,000 individuals from 144 countries participate in OpenStack, which means that the open source project is also an interest group and a community. The broad support can be seen in the range of service providers and independent software vendors who have made their services and solutions compatible with the OpenStack APIs. Since its inception, OpenStack has continuously evolved into an industry standard and is destined to become the de facto standard for cloud infrastructure.

At the cloud service broker and cloud integration layer, SAP “Monsoon” relies on OpenStack Nova (Compute), Cinder (Block Storage), Neutron (Networking) and Ironic (Bare Metal). OpenStack Ironic enables “Monsoon” to deploy physical hosts as easily as virtual machines. Among other things, OpenStack, acting as the cloud service management platform, is responsible for authentication, metering, billing and orchestration. OpenStack’s infrastructure and automation APIs help developers create their applications for “Monsoon” and deploy them on top. In addition, external APIs like Amazon EC2 can be used in order to distribute workloads over several cloud infrastructures (multi cloud).
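
How such a provisioning call looks from a developer’s point of view can be sketched with the official Python openstacksdk. The cloud name, image, flavor and network below are placeholders; “Monsoon’s” actual self-service portal and Chef integration are not public, so this only illustrates the general OpenStack API style, not SAP’s implementation.

    # Minimal sketch: booting a VM via OpenStack Nova/Neutron with openstacksdk.
    import openstack

    conn = openstack.connect(cloud="monsoon-dev")     # hypothetical clouds.yaml entry

    image = conn.compute.find_image("sles-12")        # placeholder image name
    flavor = conn.compute.find_flavor("m1.large")     # placeholder flavor name
    network = conn.network.find_network("dev-net")    # placeholder Neutron network

    server = conn.compute.create_server(
        name="hana-dev-01",
        image_id=image.id,
        flavor_id=flavor.id,
        networks=[{"uuid": network.id}],
    )
    server = conn.compute.wait_for_server(server)     # block until the VM is ACTIVE
    print(server.status)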

On the one hand, this open approach gives SAP the ability to build a standardized infrastructure that supports VMware vSphere alongside OpenStack. On the other hand, it also makes hybrid deployments possible for both internal and external customers. The on-demand provisioning of virtual as well as physical hosts completes the hybrid approach. Compared to virtual machines, the higher performance of physical machines shouldn’t be underestimated; HANA will appreciate it.

Study: SAP is regarded as a powerful OpenStack partner

SAP’s open source focus on OpenStack is nothing new. First announcements were already made in July 2014 and show the increasing importance of open source technologies to well-established industry giants.

In the meantime, SAP’s OpenStack engagement has also become known on the user side. In the context of the very first empirical OpenStack study in the DACH market, “OpenStack in the Enterprise”, Crisp Research asked 700+ CIOs about their interest in, plans for and operational status of OpenStack.

The study concluded that cloud computing has finally arrived in Germany. For 19 percent of the sampled IT decision makers, cloud computing is an inherent part of their IT agenda and IT production environments. 56 percent of German companies are in the planning or implementation phase and are already using the cloud as part of first projects and workloads. One can also say that in 2014 the OpenStack wave arrived in Germany. Almost every second cloud user (47 percent) has heard of OpenStack. Already 29 percent of cloud users are actively dealing with the new technology. While 9 percent of cloud users are still in the information phase, one in five (19 percent) has started planning and implementing an OpenStack project. However, only two percent of cloud users are running OpenStack in their production environments. For now, OpenStack remains a topic for pioneers.

On the subject of the performance capability of OpenStack partners, the study has shown that cloud users supportive of OpenStack appreciate SAP’s OpenStack engagement and expect a lot from SAP accordingly. Almost half of the sampled IT decision makers attribute “a very strong” performance capability to SAP, IBM and HP.

„Monsoon“ – Implications for SAP and the (internal) Customer

In view of the complexity associated with “Monsoon”, the project rather deserves the name “Mammoth”. Steering a supertanker like SAP into calm waters is not an easy task. Pushing standardization within a very dynamic company will meet predictable resistance, and when further acquisitions are pending, the main challenge is to integrate them into the existing infrastructure. However, “Monsoon” seems to be on the right track to building a foundation for stable and consistent cloud infrastructure operations.

To begin with, SAP will benefit organizationally from the project. The company from Walldorf promises its developers time savings of up to 80 percent for the deployment of infrastructure resources. Virtualized HANA databases, for example, can be provisioned in a completely automated way, decreasing the wait time from about one month to one hour.

In addition to the time advantage, “Monsoon” also helps developers focus on their core competency: software development. Previously, developers were involved in additional processes such as configuring and provisioning the needed infrastructure; now they can independently deploy virtual machines, storage or load balancers in a fully automated way. Besides fostering the adoption of a cost-effective and transparent pay-per-use model where used resources are charged by the hour, standardized infrastructure building blocks also support cost optimization. For this purpose, infrastructure resources are combined into standardized building blocks and provided across SAP’s worldwide datacenters.

The continuous delivery approach introduced by “Monsoon” is well positioned to gain momentum at SAP. The “Monsoon” cloud platform is extended regularly during operations, and SAP is saying good-bye to fixed release cycles.

External customers will benefit from “Monsoon” in the mid-term, as SAP applies the experience gathered in the project to its work with customers and will also feed it into future product deployments (e.g. continuous delivery).

SAP is burning too many executives in the cloud

SAP will not fail with the technical implementation of “Monsoon”. The company employs plenty of highly qualified people equipped with the necessary knowledge. However, the ERP giant keeps showing signs of weakness on the organizational level. This raises the pressing question of why ambitious employees are never allowed to see their visions through to the end. SAP has burned several of its senior cloud managers (Lars Dalgaard is a prime example). For some reason, committed and talented executives who try to drive change within the company seem to have a hard time there.

SAP should start to act on its customers’ terms. This means not only thinking about the shareholders, but also following a long-term vision (greetings from Amazon’s Jeff Bezos). The “Monsoon” project could be a beginning. Hopefully, Jens Fuchs and his team will be allowed to see the ambitious goal, the internal SAP cloud transformation, through to the end.

– – –
Image source: Christian heinze / pixelio.de
