Head in the “cloud”, feet on the ground

I have recently been reviewing a number of standards, products and technologies with the purpose of extending management systems into the cloud, based on the criteria of minimal development, simplicity, reuse, automation, and scalability.

There are several distinct business models in existence: the internet service provider view, known as Over-the-Top (OTT) in the communication service provider (CSP, i.e. telecom operator) community, and the emerging service ecosystem based on medium- to large-scale data center capability and associated services.

The IETF has provided the backbone of internet reference standards, and these have been applied with great ingenuity by the major internet service providers, who have scaled their ecosystems according to large-scale and grid computing principles and merged them into fully fledged SaaS and infrastructure “cloud” utility services.

The communication service providers, realizing that revenue is systematically being eroded by the so-called OTTs, have taken steps to reduce costs and monetize their infrastructure while entering into partnerships within an admittedly limited regulatory framework. Certainly, the current network operating model is more about cost control than it is about monetization.

Given the costs of rolling out 5G capability in optical fiber and supporting back-end systems, it is difficult to see how operators would achieve ROI on these new infrastructure services over time while essentially functioning as a utility for third parties and cutting internal costs. The more they cut internally, the less capable they become in terms of innovation and service delivery.

So they have taken steps to address this by partnering with innovative vendors in cloudification, emulating internet service providers, associating with open-source communities such as the Linux Foundation, and engaging with an extended B2B and MVNO ecosystem.

However, they face the major challenge of the network.

Network computing (I/O control and management) is one of the most difficult areas for the cloud, but it is also one of the greatest opportunities for CSPs and other potential entrants. Internet providers do not “do” network, although they are certainly positioning for this requirement.

To this end, standards bodies such as the TM Forum, the Broadband Forum and ETSI have been working to develop network interoperability standards so that the rollout and operational management of services can be simplified, automated and reused in diverse ways.

However, this work is thwarted by several realities summarized as follows:

  • Time – things are moving fast, and playing catch-up places a CSP in reactive mode while technology is still evolving. Solutions developed today may be thrown away tomorrow as obsolete, which places risk on vendor investment as well as on operator technology adoption
  • Existing operational infrastructure implies migration and dual operating modes, which are high-risk exercises impacting quality of service and organizational coherence
  • Regulatory constraints – these make investments uncertain, as it is unclear which services may be offered in the near future and thus which business priorities need to be developed
  • Complexity of emerging standards – the implementation of 5G with a focus on NFV powered by SDN capability is a work in progress, while the standards address only part of the solution (the network), leaving management and control components open to implementation via a complex overlay of off-the-shelf open-source platforms and development paradigms (OPNFV, OpenStack, OpenDaylight, ONOS and others)
  • The generalization of the management layer has been addressed to some extent by ECOMP and other relevant emerging open-source initiatives, yet there is complexity in integrating this solution with the underlying alternatives for NFV and SDN, as well as with existing operational ecosystems

It is my opinion that progress can be made by incrementally and iteratively applying sound software engineering principles to generic open platforms, while focusing on point solutions to discrete business cases.

Leveraging one or more of the existing and emerging standards is a given, at least as a reference point in design and as a source of components where these exist.

But ultimately, solutions will develop in keeping with market forces and capabilities, not by standards alone; technology is moving faster than adoption.

It is therefore essential to understand the key design and implementation aspects which can be leveraged to drive change and readiness in a CSP’s capability.

The following items are listed in no particular order; however, early analysis and business case definition will to a large extent guide the process:

  • Understanding the size and scale of the business case
  • Modelling the components throughout the system stack
  • Identifying key security and operational constraints
  • Applying virtualization and open cloud adoption, where feasible
  • Development of selected “cloud” capabilities for storage and compute nodes – this is feasible given the availability of statistical multiplexing software, but scaling down is more challenging than scaling up
  • Development of a minimal container ecosystem on which VNFs can run (see the sketch after this list)
  • Decomposition of the network control and data transfer functions (see the modelling exercise)
  • Limited application of one or more of the existing NFV environments, integrated with a virtualized compute and storage stack implemented with some form of containerization
  • Integration and migration of existing infrastructure to a cloudified environment

The exercise is largely one of decomposition, standards-based design, open product selection, and pilot development and testing – there is no silver bullet at this stage.

The question is who is best positioned to execute this work, given that existing vendors are understandably unwilling to cannibalize their existing licensing model in favor of building open, software-based ecosystems, while most operators are not set up for large-scale software development. They used to be, but that was a long time ago.

Equally, there is an open field for service provider entrants and MVNOs positioned to offer selected services based on the virtual/cloud paradigm, provided that they are ready and able to invest in the set-up of the required ecosystems.

Partnering with an integrator and/or system vendor who is willing and able to design and implement for “brownfield”, integrating existing critical operations, is key to this process and represents an opportunity for both parties.

Establishing a minimal internal architecture function is a necessity, as is maintaining internal roadmap and prioritization functions and applying impact and risk analysis to the supply chain.

In particular, there are certain key differentiators which can be realized as revenue once high-quality-of-service, virtualized (not necessarily cloud-based) infrastructure is in place; these need to be qualified in terms of legal and regulatory constraints, privacy and security.

In the next chapter on this subject, we will continue with feedback on these potential revenue streams and the constraints thereof.


Lost in the (code) translation

A recent article in The Atlantic highlighted aspects of engineering tunnel vision that result in a proliferation of code, tools, systems and standards, and in failures in their implementation.

There are two distinct threads running through this. The first is the need to simplify and structure the number of systems, code bases, tools and standards applied interchangeably in solving identical problems, and to achieve greater reliability in doing so.

The second is the need to create user-responsive capability in developing software that solves real-world problems in industry, health and science, rather than simplifying petty inconveniences, as in “app” development for improved delivery services or social media.

We are reaching the stage where experimenting with and applying various tools, resulting in identical solutions to similar problems, provides little distinct value unless tempered with an underlying laser-like focus on convergence to user needs.

How many coding systems, applications, ecosystems, and standards do we really need to make things happen?

It took over two million years to develop the hammer in its present form. Software evolves faster, but it is dispersed and mechanistic in nature, and does not always reflect comparable progress in usability and function. We may die if avionics fail. We will not go hungry, or freeze, if we don’t download the next app.

Coding as an intellectual exercise brings with it a narrow, mechanistic focus, offsetting any cross-pollination benefits.

While it keeps people in work and a vibrant consulting and training industry in shape, the focus on technology, as opposed to applying thought and rigor to practice, has led to a great deal of inertia in progressing the cause of useful coding, tools, systems, and usability.

This is evident, for example, in “agile” development practices being seen as panaceas for the inertia in applying technology to business problems.

Instead of focusing on the drivers and high-level objectives, supported by structural, methodical automation of a core set of capabilities, an approach that distances requirements from technology is followed.

While it is certain that the Agile Manifesto encapsulates many of the usability objectives laid out here, in practice the gap between an understanding of value and structure, and the resulting inadequate code, grows in proportion to the effort invested in responding to “pragmatic” deadlines.

The point is that less code is better code, and code shaped by converged design, comprehensive reasons for existence (i.e. requirements), and method in application is even better.

Fortunately, there are a number of such abstractions, and this brings the next point into view.

What constitutes a useful “converged” abstract model? It appears that, as with the development of the hammer, in computing there are certain ways of shaping the link between mathematics and usability that progress in a direction which works.

One such example is the family of *NIX systems, initially developed in 1970, which after 40+ years remain relevant and are used in the majority of current communication and computing infrastructures.

Another example, this one standards-focused, is the work done on OSI, shaped by the ITU (founded in 1865), which resulted in the OSI/Network Management Forum in 1988 and in turn morphed into the TM Forum set of standards widely used by service providers today as a reference for designing communication systems.

This work shows the need for abstractions that structurally solve a range of human problems in communicating information, and it has involved extensive investment in engineering and software development.

With the coming of the internet, and subsequently the web, software tools proliferated, automating standard communication and adding information processing functions (HTML, CSS, JavaScript). The IETF governed these new standards independently of the practices inherited from the past.

Applications became less demanding to implement as code for responsive, interactive software, and less thought was given to generalizations and principles, as these were set in the standards and tools.

The problem arose when the proliferation of standards, tools and applications led, on the one hand, to a lack of structure and method in coding practice and, on the other, to a multiplying and divergent effect.

It takes more effort to think carefully about a set of problems and their structure, leading to a successful solution, than to implement a program providing immediate results. Additionally, when tools proliferate, a lot of energy is invested in mastering them rather than in careful design and coding practice.

As developers and engineers face new needs and challenges, they constantly seek new ways of doing things; as competitors, they seek new ways of limiting their opposition by gazumping their technology – in short, “cheaper, better, faster”, roughly aligned to cost, quality and efficiency. For the foreseeable future software will proliferate, so the best possible approach is to govern the process of making software rather than to limit a growing process.

Abstract architecture and governance methods, interactive modelling and visualization languages, and data representation can go a long way in focusing attention on business and human value before any code is produced, thus reducing software proliferation and increasing quality while questioning the value of implementation objectives.

Architecture and abstraction methods exist and provide cost-saving, efficient solutions to real-life problems – they support and set the stage for users and stakeholders in understanding and setting objectives, and they guide subsequent coding practice.

While in tension with the need to deliver results pragmatically, they guide and shape better, lower-risk software systems that solve real business problems.


Buy versus build

Buy versus build is a false dichotomy.

A best-of-breed / best-of-suite oriented technology model – the idea of a software product commodity (appliance) at the center of operations, with an associated long-term, ROI-based pricing model and technology lock-in – does not always “stack” up.

There are insufficient grounds for excluding alternatives, given that a typical, increasingly relevant open-source system has hundreds of programmers working on it and is often better managed than an equivalent vendor-supplied product.

Programming is the commodity, not the box; given software availability and good programmers, it is a matter of structuring and managing the product and disciplining delivery from a software engineering perspective. The respective costs of a vendor offering ROI and of an equivalent accelerated development set-up based on open source can be quantified and compared, although many of the medium-term gains are not immediately apparent.

Saving hundreds of thousands in the process, and gaining competitive advantage through on-demand innovation is an objective worthy of pursuit.

Investing in skills and capability may be an outlier for operators and business in general, but it is a powerful and efficient way of working if managed well. Not long ago it was business as usual for service providers, who brought R&D in-house while leveraging development partners. They retained an intimate knowledge of the business and worked closely with partners through the development cycle.

An equivalent approach of building on increasingly relevant community efforts (ECOMP, TIP) and creating one’s own capability makes sense, and is a step up from having to reinvent the wheel.

Why are major service providers such as AT&T, Deutsche Telekom and Telefonica, and OTT players, engaging in these initiatives with a view to gaining control and flexibility over innovation (in creating revenue) and reducing their costs?

https://www.onap.org/

Each has mutually complementary reasons for doing so, highlighting the fact that a service provider needs more control over software, while an OTT provider such as Facebook needs more control over the network.

They believe that they can scale up more quickly and get to market faster by participating in an ecosystem that differentiates in terms of the services being offered. Or are they simply keeping an eye on the competition?

In turn, vendors are changing their development model, taking advantage of competition, partnerships and an alternative ecosystem. Having understandably embraced the open-source proposition, they are essentially assuming the role of integrator, reselling to operators, with value gained in customization, engineering and management components.

Both approaches are relevant; it is a matter of assessing costs versus technical capability and market drivers over an immediate- to medium-term time-frame, and of having the will and interest to make it happen. Certain areas, such as system management support along the lines of MANO, stand out as opportunities directly competing with traditional vendor options.

However, when all is said and done, the issue of accountability has to be taken into account – who carries the responsibility for the decision-making process leading to procurement, and for the subsequent risk in in-house development?

Given the high percentage of failures and the loss of value evidenced in multiple package deployments, it would appear that software is only one part of the risk.

For the moment, vendor partners guaranteeing the required level of support while retaining flexibility and capability over time appear to lead in preference. We shall see whether “à la carte” prevails over the “best of breed” fixed menu.

Docker – a “simple” way to develop, contain & port Unix multi-process systems

Docker has found its way into simplifying the Unix fork/exec daemon paradigm by creating contained processes and restricting communication between them via the daemon running on the host kernel.

This is old technology that has existed for over 40 years – it is as old as Unix itself (thanks, Kernighan and Ritchie; seriously, it’s not a prank, in spite of rumour, although there are times when it appears to be, obscure as it is).
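For context, here is a minimal sketch of that classic fork/exec/wait pattern, as a supervising daemon would use it to spawn and reap a worker process – the same pattern the Docker daemon wraps when it launches a containerized process. The “sleep 5” workload is purely illustrative.

```c
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    pid_t pid = fork();                 /* duplicate the calling process */
    if (pid == -1) {
        perror("fork");
        return 1;
    }
    if (pid == 0) {
        /* Child: replace this process image with the worker command. */
        execlp("sleep", "sleep", "5", (char *)NULL);
        perror("execlp");               /* reached only if exec fails */
        _exit(127);
    }
    /* Parent (the "daemon"): wait for the worker and report its status. */
    int status;
    waitpid(pid, &status, 0);
    if (WIFEXITED(status))
        printf("worker %d exited with status %d\n",
               (int)pid, WEXITSTATUS(status));
    return 0;
}
```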

The Docker daemon packages fork/exec and shared memory, obviating the need for shared or local library creation, which anyone working with Unix will admit has many variants and is complex to implement in parallel processing scenarios. One needs to be a master of the “makefile” to get this right in the traditional sense – and, honestly, who cares? Interesting.

Given the difficulties in sharing IPC space within Docker, wouldn’t it be easier to just write some IPC code libraries?
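For what it is worth, plain POSIX shared-memory IPC between cooperating processes is only a few lines of C. The sketch below is a hypothetical illustration, with no container runtime involved; inside Docker the usual answer is to give cooperating containers a shared IPC namespace via the --ipc option, trading some isolation for simplicity.

```c
#define _DEFAULT_SOURCE
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    /* One page of anonymous memory shared between parent and child. */
    char *shared = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                        MAP_SHARED | MAP_ANONYMOUS, -1, 0);
    if (shared == MAP_FAILED) {
        perror("mmap");
        return 1;
    }

    pid_t pid = fork();
    if (pid == 0) {
        /* Child: write a message into the shared page. */
        strcpy(shared, "hello from the child process");
        _exit(0);
    }

    waitpid(pid, NULL, 0);
    /* Parent: read what the child wrote. */
    printf("parent read: %s\n", shared);
    munmap(shared, 4096);
    return 0;
}
```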