Measuring is knowing – almost

Once upon a time I listened to a talk with this title, given by a CFO of a multi-national service provider.

The intention was to measure activity, cost, and revenue across a sprawling estate so that better decisions could be taken.

More than ten years later, with additional experience of the challenges operators face in addressing so-called “digital” initiatives, the subject has come back to mind. The following three risks, all directly related to measurement, remain unaddressed and present an obstacle to decision making and monetization:

  • Lack of performance measurement to drive execution
  • Failure to understand what customers value
  • Inability to extract value from network assets


In fact, with certain exceptions, there is no clear understanding of value flow across an operating company. CSPs do not measure value across the operation and towards their customers, despite the hard work done by the teams in each vertical domain, and in quantifying the heavyweight procurement decisions necessary in large-scale network rollout and the like. Clearly this is not enough; joining up the dots is treated as optional.

Informed operators are seeking business decision-making support through tools, methods, and processes, and a thriving consulting business has developed around that. However, the organizational divides across the enterprise prevent visibility, maintaining well-known vertical cost centers and spheres of influence. At the same time, nimble integrated companies flourish by eating the whale’s dinner; or rather, as trusted partners, their competitive advantage and comparative margin are higher than those of the host.

The operator’s response has been to throw technology at the perceived problem, to initiate high-end, long-duration transformation programs encouraged by vendors, and to reduce staffing through outsourcing and other means.

Most of these attempts have failed to deliver the expected value, resulting in increased inertia and complexity, a corresponding increase in operating cost in the short term, and reduced ROI overall.

On the other hand, agility has been touted as a solution for optimizing demand and supply, without taking into account the value such an approach will bring given business, organizational, and technical requirements and constraints.

I’m suggesting that, in order to progress change through a service provider organization, the first step is to identify the value proposition, at a moment in time and for the foreseeable future, together with all the variables of uncertainty.

Value can be mapped to investment objectives, and these in turn to current and future operating models, right down to the technology enablers. In doing so there is a great deal of scope for exercising both left- and right-brain capabilities in planning and innovation across a group of people: the stakeholders.

Smart and nimble integrated companies know this and start with a small core team, well established value drivers, a flexible partner strategy and a clear technology differentiator. Once those principles are established brand and capability will follow.

The point I’m trying to make is that by starting with an understanding of core business value, as a whole and individually, an operator may improve their key strengths (and identify areas of improvement) rather than pursuing arbitrary value-creating initiatives, competing with global portfolios of standardized and established products.

In doing so they can begin to make the internal changes necessary to support increasing levels of automation and to support research and development, changes which do not fit neatly into the vertical functions established over twenty years ago.

The consumer does not care if a product is sliced and diced in a myriad of ways; they want a simple, effective, easily understood offer, one which always works, everywhere, and holds no surprises.

Adding “digital” complexity to the customer experience just adds cost and frustration without fundamentally improving the user experience. There are products which do this and do it well, and quite frankly, keeping up with them is a lost cause.

As for the silver bullet presented by cloud and virtualization: while these may reduce complexity on one level and promote effectiveness, they shift the work elsewhere. Cloud is an improvement tool, not a solution to the key problem of gaining value, effectiveness, and efficiency from existing and new assets while delivering a world-class service.

One may argue that this concept of value realization is not restricted to service provider operators alone (we can call them Telcos, it’s OK); it is part and parcel of any digital business. But the challenges Telcos face are unique in that there is an established order, and a regulatory framework, which must be overcome and influenced before value and revenue can be gained through innovation and automation.

The very word “digital” is misleading, in that Telcos led the way in digital capability from the get-go; they are already digital and highly capable of delivering an adequate service to millions of subscribers, albeit poorly integrated. Telcos implemented M2M over 20 years ago, and the prime invention of caller identification set the stage for telephony-based services in 1968. The implications of this were huge, leading for example to a range of applications in secure authentication, including banking.

This invention, and innumerable others, did not occur in a vacuum or a rigid shareholder-driven environment; they were fostered by a well-funded, integrated partnership between research and industry, scaling across many different organizational silos and interests. But we are speaking of history.

So the second step is to revert to the original ecosystem framework: drop the silos, the organizational structures which did so well in terms of process engineering, specialization, and skills development in the recent past, but which directly contribute to organizational inertia in the present, perpetuating the view that as long as our department performs, it is someone else’s responsibility to address customer complaints, fix faults, or provide replacement modems on time.

Seen from a current perspective, Telco is a production line, which can be largely automated, with a healthy marketing and R&D component. Skills can and should be interchangeable and transferrable, with some notable exceptions, and new skills are required to invent and run the latest technologies effectively. We need to formalize an experimental, inventive mode while doing solid production engineering, and find the funds to do it with. It will help if we know, and can measure, what we have and what we want.

In summary:

  • Identify and measure value collaboratively, within and outside the organization
  • Set up the investment plan according to capability, with clearly defined metrics
  • Include an R&D and investment budget
  • Plan and initiate research and development, or find a partner who can
  • Track performance across the organization
  • Increase and extend capability through internal skills development and external ecosystems
  • Drop the silos and clarify the new organization
  • Run the programs
  • Measure results on an ongoing basis against a common framework


An IoT Bill of Materials

Building an IoT BoM takes into account several areas of interest, which can be roughly grouped as follows:

  • The items to be perceived/sensed
  • Physical interaction layer: manufacturing sensors, design-time and run-time equipment, home device sensors
  • The network (far edge), which carries perceived data for processing, and interaction messages for controlling the items to be perceived
  • The near-edge device(s), or cloud interface, which support processing in the data and control layers as well as grid/cloud computing and software network evaluation
  • Services layer, including configuration (network and indicator) and user management
  • Processing and analytics
  • Data extract and integration

All of these components have a certain energy footprint to be taken into account in the design of an IoT system and the subsequent BoM relating to the evaluation/measurement or manufacturing chain.

Once the complete E2E operational footprint is taken into account, it becomes possible to offset operational costs against the savings gained as value through the implementation.
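As a toy illustration of this offset, the sketch below sums a BoM’s energy footprint and nets it against projected savings. All component names, wattages, quantities, and tariffs are invented for the example.

```python
# Toy sketch: summing an IoT BoM's energy footprint and offsetting
# operating cost against projected savings. All figures are invented.

BOM = [
    # (component, unit_watts, quantity)
    ("field sensor",      0.05, 200),
    ("far-edge gateway",  8.0,   10),
    ("near-edge node",   45.0,    2),
    ("cloud processing", 300.0,   1),
]

TARIFF_PER_KWH = 0.25          # assumed energy price
HOURS_PER_YEAR = 24 * 365

def annual_energy_cost(bom):
    """Total yearly energy cost of all BoM components."""
    watts = sum(unit_w * qty for _, unit_w, qty in bom)
    kwh = watts * HOURS_PER_YEAR / 1000
    return kwh * TARIFF_PER_KWH

def net_value(bom, projected_savings):
    """Savings delivered by the system minus its operational energy cost."""
    return projected_savings - annual_energy_cost(bom)
```

The same structure extends naturally to the people and SLA costs discussed below; the point is that the offset only becomes computable once every component is enumerated.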

The IoT BoM should include the necessary measurement and monitoring components, providing the capability to capture operational information and compare it with target metrics. These management components are added as a sub-set of the total processing components listed above.

Finally, the operational profile should include the people necessary to run the system acceptably within the stated SLAs. This is a critical aspect of evaluation when assessing comparative options for conducting the process in some other manner (e.g. land-based versus aerial remote sensing).

Once the necessary levels of QoS are defined, another pass through the design is needed to ensure that performance characteristics meet stated objectives, at least on paper/in specifications, and based on selected tests.

At this stage a pilot study can be carried out to verify operational characteristics and feed back results, ensuring that strategic as well as operational objectives are met. This presents an opportunity to test measurement and monitoring results and failure conditions, and generally to apply operational readiness and security procedures, ensuring the system meets its initial specifications.

The verification process can now begin, ensuring that test results are in keeping with the specific technical and regulatory constraints specified in earlier phases.






IoE Design & Monetization Criteria

IoE is broad and undergoing rapid change in terms of technological capability, while business needs remain, understandably, largely undefined, with the exception of certain well-known domains such as industrial automation, utility management, smart cities, and emergency applications, to name a few.

As such, we present a number of business drivers and design options, understanding that certain factors are key to mid- to long-term deployments in terms of reusability and extension into existing and emerging technologies. We believe most if not all of these components have a value which may be interpreted in discrete terms, either as money or as an exchange medium. In closing, a section on revenue management identifies some of the opportunities and challenges faced in data ownership and utilization.

Firstly, the gateway, performing the role of communicating with IoT devices at the network edge, has an important part to play in providing a generic business capability for different types of application devices which need to work together to perform a service, for example a unified next-generation network consisting of traffic detection/sensor devices, VoIP endpoints, and actuators.

This key component needs processing capability in order to perform a number of functions meeting multiple business and technical objectives; this implies a programmable operating system.

The gateway has multiple capabilities in processing diverse IoT protocols, such as Zigbee and others, many of which are non-standard – see the following link for details on these

On the northbound interface, gateways should be designed to provide a single unified communication protocol at layer 3/4, which means they should effectively function as multi-protocol routers. This may be useful in establishing a common “fog” edge network, allowing communication between gateways as well as back to the home-base cloud.

The single northbound interface allows considerable simplification of the supporting IoT-to-cloud control protocol and the associated cloud stack. While this approach adds complexity to the gateway design, it simplifies rollout and implementation and substantially reduces operating and maintenance costs.
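The gateway’s normalization role can be sketched as follows: frames arriving over diverse southbound protocols are translated into one unified northbound message format. The protocol names and frame fields here are purely illustrative, not a real device API.

```python
# Minimal sketch of the gateway's normalization role: frames arriving
# over diverse southbound protocols are translated into one unified
# northbound message. Protocol names and fields are illustrative only.

import json

def parse_zigbee(frame: dict) -> dict:
    return {"device": frame["addr"], "metric": frame["cluster"], "value": frame["val"]}

def parse_modbus(frame: dict) -> dict:
    return {"device": frame["slave_id"], "metric": frame["register"], "value": frame["value"]}

SOUTHBOUND_PARSERS = {"zigbee": parse_zigbee, "modbus": parse_modbus}

def northbound(protocol: str, frame: dict) -> str:
    """Normalize any supported southbound frame to one JSON message."""
    parser = SOUTHBOUND_PARSERS[protocol]
    msg = parser(frame)
    msg["proto"] = protocol
    return json.dumps(msg, sort_keys=True)
```

Adding a new device class then means adding one parser at the edge, with no change to the cloud stack, which is the operational saving claimed above.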

The next key area of interest is the southbound control flow between the cloud and the IoT gateway. While a number of protocols have been developed for this purpose (e.g. MQTT), it may be possible to use standard SNMP/NETCONF instead, either standalone or overlaid with a messaging system.

Data and control flows are kept separate, with synchronization of device state maintained in the topology structure and persistent storage, expressed for example as a YANG model for NETCONF interaction or encapsulated within an API of choice.
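The separation of the two flows can be sketched minimally: the control plane is the only path that mutates stored device state, while the data plane merely annotates readings with the current configuration. The structure is illustrative; a real system might express the state as a YANG model.

```python
# Sketch: keeping data and control flows separate, with device state
# mirrored in a topology store and mutated only via control messages.
# Structure is illustrative; a real system might use a YANG model.

class TopologyStore:
    def __init__(self):
        self.state = {}          # device id -> last known config/state

    def apply_control(self, device_id, update: dict):
        """Control plane: merge a config/state update for a device."""
        self.state.setdefault(device_id, {}).update(update)

    def on_data(self, device_id, reading: float):
        """Data plane: readings pass through; they never mutate config."""
        cfg = self.state.get(device_id, {})
        return {"device": device_id, "reading": reading,
                "config_version": cfg.get("version")}
```

The key invariant is that `on_data` is read-only with respect to the store, so the persistent state can always be reconciled against the control plane alone.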

More can be said on the cloud-based streaming services which will handle data flow aggregation, correlation, and manipulation, and the subsequent analytics.

Sensor data, derived from multiple sources and lacking structure, needs to be aggregated, pattern-matched, and associated with specific indicators, both for statistical analysis in terms of the application and for performance management, in near-real time and subsequently off-line.

Note that where IoT functions must be controlled, data feedback needs to be immediate; the aggregation and analysis therefore need to happen in streaming mode.
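A minimal sketch of that streaming mode: readings are aggregated in a sliding window, and a control command is emitted immediately when the windowed average crosses a threshold, rather than waiting for a batch cycle. Window size and threshold are invented for illustration.

```python
# Sketch of streaming-mode aggregation: readings are aggregated in a
# sliding window, and a control action is emitted immediately when the
# windowed average crosses a threshold. Figures are invented.

from collections import deque

class StreamAggregator:
    def __init__(self, window=5, threshold=50.0):
        self.buf = deque(maxlen=window)   # sliding window of readings
        self.threshold = threshold

    def ingest(self, value: float):
        """Aggregate one reading; return a control command if needed."""
        self.buf.append(value)
        avg = sum(self.buf) / len(self.buf)
        if avg > self.threshold:
            return {"action": "throttle", "window_avg": avg}
        return None
```

Off-line analytics can still consume the same readings later; the point is that the control decision is taken inline, per event.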

There are a number of tools for handling persistence and availability, as well as stream processing. In this discussion we focus only on the key areas of interest in handling an IoT network in a similar fashion to a heterogeneous next-generation network, noting that capabilities will converge over the next couple of years, especially with the advent of software-defined networks, micro technology, and the increasing use of IP within the device ecosystem.

Every application network will have an initial and subsequent state and a specific configuration. This must be stored somewhere and must persist topology information, leading to two distinct requirements: storing and managing the static network layout, initial configuration, and service representation; and the dynamic network layout, state, and configuration, handled in an intermediate, distributed data store which interacts with the network via control and data flows. This latter dynamic network is a mirror of the IoT network state at a particular point in time, and maintains state via updates to and from the control plane. Here we leave a small note on developing further the logical definition of what constitutes a dynamic versus a static representation of a network layout, and how each is physically expressed; it is a subject of its own.
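The two representations can be sketched side by side: a static layout (initial configuration and service definitions) and a dynamic mirror of live state, seeded from it and updated only via the control plane. All node names and fields are invented.

```python
# Sketch of the two representations: a static layout (initial config,
# service definitions) and a dynamic mirror of live state, kept in
# sync via control-plane updates. Names and fields are illustrative.

STATIC_LAYOUT = {
    "gw-1": {"segment": "north", "devices": ["s1", "s2"], "interval": 60},
}

class DynamicMirror:
    """Live view of the network, seeded from the static layout."""
    def __init__(self, static_layout):
        # copy each record so live updates never mutate the static one
        self.live = {k: dict(v) for k, v in static_layout.items()}

    def control_update(self, node, **changes):
        self.live[node].update(changes)

    def drift(self, node):
        """Fields where live state has diverged from the static layout."""
        return {k: v for k, v in self.live[node].items()
                if STATIC_LAYOUT[node].get(k) != v}
```

A `drift` query of this kind is one concrete way of expressing the reconciliation between the two requirements noted above.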

As for the virtualization/cloud aspect, increases and decreases in IoT data volume are inevitable and may well be unpredictable, a business case which lends itself naturally to the solution provided by 5G MANO and NFV/SDN-like management and control. However, not all IoT cases are alike, and there may well be no justification for a cloud-based operational framework in the first place; this is a matter for up-front evaluation.

Each IoT network may be segmented in a number of ways based on location, APN, service, priority, QoS, and other criteria [1]. Topology therefore provides a natural guideline for setting up a virtual infrastructure and initial resource allocation. Each IoT network segment may be allocated a VNF instance and an associated contained service managed by the cloud controller.

The ability to scale the IoT up or down may be based on (i) feedback received from the devices as part of the data collection and indicator processing cycle, (ii) network control updates received as part of normal processing, and (iii) monitoring systems or probes.
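A scale decision combining those three inputs might look like the sketch below; the normalization constants and thresholds are invented, and a real controller would feed this from live telemetry.

```python
# Sketch of a scale-up/down decision combining the three inputs listed
# above: device feedback, control-plane updates, and external probes.
# Weightings and thresholds are invented for illustration.

def scale_decision(device_load, control_backlog, probe_latency_ms,
                   up_at=0.8, down_at=0.3):
    """Return 'up', 'down', or 'hold' for one IoT segment's VNF instance."""
    # normalize each signal to [0, 1] and take the worst case
    signals = [
        min(device_load, 1.0),               # (i) data collection cycle
        min(control_backlog / 100.0, 1.0),   # (ii) network control updates
        min(probe_latency_ms / 500.0, 1.0),  # (iii) monitoring probes
    ]
    pressure = max(signals)
    if pressure >= up_at:
        return "up"
    if pressure <= down_at:
        return "down"
    return "hold"
```

Taking the worst of the three signals is a deliberately conservative choice: any one degraded input is enough to trigger scale-up.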

In summary, the following design criteria are key to the successful implementation of an IoT network, all of which may be monetized in some form:

1. Network topology and state, as maintained statically and dynamically

2. Cloud infrastructure allocated to IoT instances and initial scaling of the gateway to cloud processing ratio in terms of resources, network planning, availability

3. Gateway design, build, and selection based on (i) northbound interface standardization, (ii) multi-protocol conversion, (iii) filtering and aggregation functions, and (iv) edge network distribution

4. Control and data layer  implemented on the cloud, preferably with dynamic resource allocation per IoT instance grouping

5. Flexible and configurable streaming and data analysis, machine learning for sensor data optimization and feedback

6. Persistent data storage for analytics and operational / service management

7. Monitoring and management functions (partly achieved by cloud management), with an embedded FCAPS function within the gateway aggregating device management data, and via external active monitoring, e.g. probes, NMS

Revenue Management


Finally, we touch on one of the most important aspects of IoE, namely revenue management. This summary is intended to provoke discussion and is by no means definitive; we will go into detail in separate, focused articles.

Recently we addressed a number of business opportunities within several sectors, finding that there was little or no comprehension of how to monetize the service delivery and usage stack across multiple stakeholders, or even how to price our services within their delivery ecosystem.

This applies especially to constructing applications which operate across public and private clouds and multiple networks spanning edge, transport, and access, where technical complexity across several layers of technology and multiple stakeholders makes the business proposition unclear.

While one can reasonably define platform and network utilization fees, it is more difficult to quantify, measure, and price services offered as part of the IoE application, as a package, to the various parties, and ultimately the customer, each with separate business and operating models.

Coming from a background of designing and applying revenue management across multiple heterogeneous ecosystems in the mobile and broadband B2B, retail, and wholesale areas, we can identify the following key requirements for making sense of an IoT revenue model.

The first thing to consider is that traditional centralized revenue management operational models (and systems) will not easily scale to IoE ecosystems. This is primarily because there are several legal and technical data ownership and trust elements to consider when distributing critical information across multiple estates owned and operated by separate parties.

The first step, based on the data definition design criteria discussed above, is to be able to structurally model the static infrastructure, and the mediation thereof. This is relatively easy given the extensive capability available for service and resource modelling (see the ITU/TMF SID, YANG models, and generic data modelling for heterogeneous networks). Note that an IoE network is not static and may be constantly changing in terms of the interrelationships between devices, gateways, and the transport network. This is not far different from operating a mobile network, but the IoE network has no standard unified operation, as 4G does, for example; this is work in progress.

Once services and associated resources are well defined over an initial infrastructure configuration, it is possible to mediate and stream data over a number of the ecosystems, ensuring that the supporting methods (and protocols) expose usage. There are constraints in doing so, notably the fact that IoE data is mostly sessionless; however, the volume and type of data are available in primitive form.

When connecting into an IoE ecosystem, a user may use existing infrastructure or add infrastructure in the form of new devices. We can therefore classify (i) a provisioning request and (ii) a utilization request as two distinct initiators of the order-to-cash process within the IoE revenue model, followed potentially by usage measurement for particular devices, or associated processing as evidenced.
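The two order-to-cash initiators can be sketched together with a primitive rating step. Since IoE data is sessionless, the rating here works only on volume, as noted above; the event fields and price are invented for illustration.

```python
# Sketch of the two order-to-cash initiators described above: a
# provisioning request (new infrastructure) and a utilization request
# (use of existing infrastructure). Event fields are invented.

def classify_request(event: dict) -> str:
    """Route an incoming IoE event into the revenue model."""
    if event.get("new_device"):
        return "provisioning"      # triggers onboarding / account setup
    return "utilization"           # triggers usage measurement / rating

def rate_usage(events, price_per_mb=0.01):
    """Primitive rating: with sessionless IoE data, only volume and
    type are available, so utilization is charged by data volume."""
    usage = [e for e in events if classify_request(e) == "utilization"]
    return sum(e.get("mb", 0) for e in usage) * price_per_mb
```

Per-subscriber rating, as flagged below, would need an explicit device-to-user association that IoE currently lacks.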

The association of device usage with individual users will prove a challenge, as there is currently no explicit method of managing subscriber references as there is in telephony.

As an operator of the devices and the cloud (public or private) back-end, there are various ways of monetizing bulk data aggregation and actuation; these functions can be modeled according to different application needs and the value apportioned to the business nature of the application.

For example, a safety-critical IoT application may prevent loss of life and equipment by reducing risk through detection and notification. These risks may be offset by the introduction of a fire detection and notification system, such that insurance premiums may be reduced, and conversely raised in the event of non-compliance.

This premium rating model has been widely used in actuarial methods for some time, as applied to industrial insurance and other areas. With the advent of big data, risk avoidance can be built into risk signalling, and premiums adjusted based on operating data profiled over time.
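A toy version of that premium adjustment might look as follows; every rate, factor, and threshold is invented, and a real actuarial model would be far richer.

```python
# Toy sketch of the premium rating model described above: a premium is
# adjusted using a risk signal derived from operating data profiled
# over time. All rates and factors are invented.

def risk_signal(incidents_per_year, detector_uptime):
    """Higher incident rate raises risk; detector uptime offsets it."""
    base = min(incidents_per_year / 10.0, 1.0)
    return base * (1.0 - 0.5 * detector_uptime)   # uptime halves risk at best

def adjusted_premium(base_premium, incidents_per_year, detector_uptime):
    """Discount for good operating data, surcharge for non-compliance."""
    r = risk_signal(incidents_per_year, detector_uptime)
    return base_premium * (0.8 + 0.4 * r)   # 20% max discount or surcharge
```

The interesting monetization point is that the uptime input comes straight from the IoT monitoring data discussed earlier, turning operational telemetry into a priced signal.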

See this article from the Australian actuarial association for further reference:

Naturally, access to operational information by third parties implies high levels of security and confidentiality, and is open to misuse and abuse.

In summary, we have the following business requirements so far:

  • Mediation of sessionless, heterogeneous information across partners
  • Interfaces to partner providers and operators
  • Understanding and offsetting legal and regulatory constraints in data ownership
  • Association of user profiles with usage for monetization and reward
  • Estimating and measuring application value
  • Risk offset and reward monetization

The second main challenge is the ownership of information aggregated as part of IoE processing, from which all sorts of meaningful information may be extracted, having value over and above that of the original data.

As an example, a typical household user profile for electricity and water usage, individually and over a municipal region, may be related to the operational and financial optimization of the utilities. In exchange for setting up and providing such a profile to an IoT provider, it may be possible to reduce the consumer’s bill as a reward.

The data they generate may be anonymized and sold on for research and marketing purposes, or massaged into analytic information. But increasingly, and we would assert rightfully so, ownership of that data-driven information is being called into question.
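One minimal sketch of such anonymization before resale: identifiers are dropped and records are only released aggregated over a region, suppressed when the group is too small to hide individuals (a crude k-anonymity-style rule). The field names are invented.

```python
# Sketch of anonymizing household usage data before resale: ids are
# dropped and records are only released aggregated per region, and
# suppressed when the group is too small to hide individuals (a crude
# k-anonymity-style rule). Field names are invented.

def anonymized_region_profile(records, region, k=5):
    """Aggregate usage for one region; None if fewer than k households."""
    group = [r for r in records if r["region"] == region]
    if len(group) < k:
        return None                      # too few households to anonymize
    return {
        "region": region,
        "households": len(group),
        "avg_kwh": sum(r["kwh"] for r in group) / len(group),
        # deliberately no household ids in the output
    }
```

Suppression of small groups is exactly the kind of safeguard the ownership questions below turn on: the aggregate has resale value precisely because individuals cannot be recovered from it.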

Can the rightful owner be reimbursed for revealing their usage patterns, and by whom, given that the IoT is not yet based on a subscriber model?

Are intermediary brokers sufficiently secure to ensure that the information does not fall into the wrong hands?

Is the utility service provider entitled to generate revenue from the usage data collected for taxable resource utilization?

These are some of the questions to be answered over the next few years. Noting that technology leads regulatory advances, the landscape is constantly evolving, making some issues redundant and bringing others into play.

Technologies such as blockchain are being touted as potential solutions to the trust aspect surrounding monetization, but it is as yet unclear whether there is a silver bullet to address these issues discretely.

In all probability, a mix of things will come into play, all of which require a fundamental understanding of privacy law, security, regulation, and standardization, both local and international, over and above technical and organizational constraints.











Head in the “cloud”, feet on the ground

We recently reviewed a number of standards, products, and technologies with the purpose of extending management systems into the cloud, based on the criteria of minimal development, simplicity, reuse, automation, and scalability.

There are several distinct business models in existence: the internet service provider view, otherwise known as Over-the-Top (OTT) by the communication service provider (CSP) community, i.e. the telecom operators; and the emerging service ecosystem based on medium- to large-scale data center capability and associated services.

The IETF has provided the backbone of internet reference standards, and these have been applied with great ingenuity by the major internet service providers as they have focused on scaling their ecosystems according to large-scale and grid computing principles, merging into fully fledged SaaS and infrastructure “cloud” based utility services.

The communication service providers, realizing that revenue is systematically being eroded by the so-called OTTs, have taken steps to reduce costs and monetize their infrastructure while entering into partnerships within an admittedly limited regulatory framework. Certainly, the current network operating model is more about cost control than it is about monetization.

Given the costs of rolling out 5G capability, optical fiber, and supporting back-end systems, it is difficult to see how operators would achieve ROI on these new infrastructure services over time while essentially functioning as a utility for third parties and cutting internal costs. The more they cut internally, the less capable they become in terms of innovation and service delivery.

So they have taken steps to address this by partnering with innovative vendors in cloudification, emulating internet service providers, associating with open-source communities such as the Linux Foundation, and partnering with an extended B2B and MVNO ecosystem.

However they face the major challenge of the network.

Network computing (I/O control and management) is one of the most difficult areas for the cloud, but also one of the greatest opportunities for CSPs and other potential entrants. Internet providers do not “do” network, although they are certainly positioning for this requirement.

To this end, standards bodies such as the TeleManagement Forum, the Broadband Forum, and ETSI have been working to develop network interoperability standards so that the rollout and operational management of services can be simplified, automated, and reused in diverse ways.

However, this work is thwarted by several realities summarized as follows:

  • Time – things are moving fast, and playing catch-up places a CSP in reactive mode while technology evolves. Solutions developed today may be thrown away tomorrow as obsolete, placing risk on vendor investment as well as on operator technology adoption
  • Existing operational infrastructure – implies migration and dual operating modes, which are high-risk exercises impacting quality of service and organizational coherence
  • Regulatory constraints – make investments uncertain, as it is unclear which services may be offered in the near future and thus which business priorities need to be developed
  • Complexity of emerging standards – the implementation of 5G, with a focus on NFV powered by SDN capability, is work in progress, while standards address only part of the solution (the network), leaving management and control components open to implementation via a complex overlay of off-the-shelf open-source components and development paradigms (OPNFV, OpenStack, OpenDaylight, ONOS, and others)
  • Generalization of the management layer – this has been addressed to some extent by ECOMP and other relevant open-source initiatives, yet there is complexity in integrating such a solution with the underlying alternatives for NFV and SDN, as well as with existing operational ecosystems

It is our opinion that progress can be made through iterative efforts, incrementally applying sound software engineering principles to generic open platforms while focusing on point solutions to discrete business cases.

Leveraging one or more of the existing and emerging standards is a given, at least as a reference point in design, together with the utilization of components where these exist.

But ultimately, solutions will develop in keeping with market forces and capabilities, not by standards alone; technology is moving faster than adoption.

It is therefore key to understand the design and implementation aspects which can be leveraged to drive change and readiness in CSP capability.

The following items are listed in no particular order; however, early analysis and business case definition will to a large extent guide the process:

  • Understanding the size and scale of the business case
  • Modelling the components throughout the system stack
  • Identifying key security and operational constraints
  • Applying virtualization and open cloud adoption, where feasible
  • Development of selected “cloud” capabilities for storage and compute nodes – this is feasible given the availability of statistical multiplexing software, but scaling down is more challenging than scaling up
  • Development of a minimal container ecosystem on which VNFs can run
  • Decomposition of the network control and data transfer functions (see the modelling exercise)
  • Limited application of one or more of the existing NFV environments, integrated with a virtualized compute and storage stack implemented with some form of containerization
  • Integration and migration of existing infrastructure to a cloudified environment

The exercise is largely one of decomposition, standards-based design, open product selection, and pilot development and testing – there is no silver bullet at this stage.

The question is who is best positioned to execute this work, given that existing vendors are understandably unwilling to cannibalize their existing licensing model in favor of building open software-based ecosystems, while most operators are not set up for large-scale software development. They used to be, but that was a long time ago.

Equally, there is an open field for service provider entrants and MVNOs positioned to offer selected services based on the virtual/cloud paradigm, provided they are ready and able to invest in setting up the required ecosystems.

Partnering with an integrator and/or system vendor who is willing and able to design and implement for “brownfield”, integrating existing critical operations, is key to this process and represents opportunities for both parties.

Establishing a minimal internal architecture function is a necessity, as is maintaining internal road map and prioritization functions, and applying impact and risk analysis to the supply chain.

In particular, there are certain key differentiators which can be realized as revenue once high-quality-of-service, virtualized (not necessarily cloud-based) infrastructure is in place; these need to be qualified in terms of legal and regulatory constraints, privacy, and security.

In the next chapter on this subject, we will continue with feedback on these potential revenue streams and the constraints thereof.













Lost in the (code) translation

A recent article in the Atlantic magazine highlighted   aspects of engineering tunnel vision resulting in  proliferation of code, tools, systems, and standards, and failures in their implementation.

There are two distinct threads running through this, firstly the need for simplifying and structuring the number of systems, code base, tools and standards applied interchangeably in solving identical problems, and  achieving greater reliability in doing so.

Secondly, the need for  creating user  responsive capability in developing software solving real-world problems, in industry, health, science, rather than simplifying petty inconveniences, ie. as in “apps” development, for improved delivery services, or social media.

We are reaching the stage where experimenting with various tools that yield identical solutions to similar problems provides indistinct value, unless tempered by an underlying, laser-like focus on converging on user needs.

How many coding systems, applications, ecosystems, and standards do we really need to make things happen?

It took over two million years to develop the hammer in its present form. Software acceleration is faster, more dispersed, and mechanistic in nature, and does not always reflect comparable progress in usability and function. We may die if avionics fail; we will not go hungry, or freeze, if we don't download the next app.

Coding as an intellectual exercise brings with it a narrow, mechanistic focus, offsetting any cross-pollination benefits.

While keeping people in work and a vibrant consulting and training industry in shape, the focus on technology, as opposed to applying thought and rigor to practice, has led to a great deal of inertia in progressing the cause of useful coding, tools, systems, and usability.

This is evident, for example, in "agile" development practices seen as panaceas for inertia in applying technology to business problems.

Instead of focusing on the drivers and high-level objectives, supported by a structured, methodical automation of a core set of capabilities, an approach that distances requirements from technology is followed.

While the agile manifesto encapsulates many of the usability objectives laid out here, in practice the gap between an understanding of value and structure, and the resulting inadequate code, grows in proportion to the effort invested in responding to "pragmatic" deadlines.

The point is that less code is better code, and code shaped by converged design, comprehensive reasons for existence (i.e. requirements), and method in application is better still.

Fortunately there are a number of such abstractions, and this brings the next point into view.

What constitutes a useful "converged" abstract model? It appears that, as with the development of the hammer, in computing there are certain ways of shaping the link between mathematics and usability that progress in a direction which works.

One such example is the development of the *NIX systems, initially developed in 1970, which after 40+ years remain relevant and are used in the majority of current communication and computing infrastructures.

Another, standards-focused, example is the work done by the OSI, shaped by the ITU (founded in 1865), resulting in the OSI/Network Management Forum in 1988, which in turn morphed into the TM Forum set of standards, widely used by service providers today as a reference for designing communication systems.

This work shows the need for abstractions in structurally solving a range of human problems in communicating information, involving extensive investment in engineering and software development.

With the coming of the internet, and subsequently the web, software tools proliferated, automating standard communication and information-processing functions (HTML, CSS, JavaScript). The IETF governed these new standards, independently of the practices inherited from the past.

Applications became less demanding to implement as code for responsive, interactive software, and less thought was given to generalizations and principles, as these were set in the standards and tools.

The problem arose when the proliferation of standards, tools, and applications led, on the one hand, to a lack of structure and method in coding practice and, on the other, to a multiplying and divergent effect.

It takes more effort to think carefully about a set of problems, and their structure, leading to a successful solution, than to implement a program providing immediate results. Additionally, when tools proliferate, a lot of energy is invested in mastering them rather than in careful design and coding practice.

As developers and engineers face new needs and challenges, they constantly seek new ways of doing things; as competitors, they seek ways of limiting their opposition by gazumping its technology: in short, "cheaper, better, faster", roughly aligned to cost, quality, and efficiency. For the foreseeable future software will proliferate, so the best possible approach is to govern the process of making software, rather than to limit a growing process.

Abstract architecture and governance methods, interactive modelling and visualization languages, and data representation can go a long way in focusing attention on business and human value before any code is produced, thus reducing software proliferation and increasing quality, while questioning the value of implementation objectives.

Architecture and abstraction methods exist and provide cost-saving, efficient solutions to real-life problems; they support and set the stage for users and stakeholders in understanding and setting objectives, and they guide subsequent coding practice.

While in tension with the need to deliver results pragmatically, they guide and shape better, lower-risk software systems that solve real business problems.


Identifying the key incentives and drivers for change is one of the most important and challenging aspects of developing an organizational and technical strategy.

After developing a business case to determine drivers and potential benefits, the organization is in a position to focus on developing a change strategy.

The need for change must be clearly defined and accepted by key stakeholders, while the benefits to be derived from the enabling changes are articulated, documented, and accepted.

Benefits may yield both tangible and intangible results, while a collaborative approach to defining them prepares and plans the implementation programs to follow.

The first step is to ensure communication and engagement between sponsors and stakeholders in producing and prioritizing the key value propositions. This approach applies just as much to purely organizational objectives as to technical and engineering goals.

A thorough understanding of, or research into, present and future conditions (states) will help in shaping target drivers and establishing an initial road map for implementation. The social, process, and organizational components through which value-based IT shapes a portfolio plan are discussed elsewhere; here, the focus is on certain methods for developing a change program.

Other than poor execution, one of the biggest causes of program failure is a lack of real or perceived value and acceptance. Establishing strong buy-in up front, identifying value and reasons for concern, should therefore be a priority. Often the level of dissatisfaction among staff, loss of revenue, and competitive threats provide sufficient grounds for action, but relying on experience, which may be limited or fragmented, does not produce repeatable outcomes.

The use of enterprise architecture in defining the details of an operating model for an organization will guide change in a formal, structured, and visible manner.

While engaging personalities and complementary personal leadership attributes make a difference in developing and managing change processes, and they are needed, these alone are no substitute for clarity, through method, in the definition and acceptance of common goals, especially where organizational governance is concerned.

Progress needs to be measured against a well-defined baseline, relative to which controlled changes may be made with reduced impact on all concerned stakeholders.

Additionally, structure reduces the number of decisions which need to be made in coming to a mutually beneficial conclusion.

The term "architecture" is somewhat loaded, suggesting a focus on technical aspects, but this is not the intended scope of the change initiative. Technical and engineering aspects will inform and detail a company operating model derived from the enterprise or point business drivers and organizational objectives.

Architectures divorced from operational reality, established in an ivory tower, are not useful in developing transformation objectives. While useful as a guideline to execution, they often take a tactical path to a solution without considering the cause of the problem itself, and the nature of the requirements.

Indeed, in certain cases a ready-made solution is offered before taking into account all the preceding factors, such as capability, existing state, and organizational inertia. As an example, certain fads drive initiatives: agile and DevOps are useful solutions applied without adequate problem definition. Time is often an issue, and a quick-fix imperative overrides analysis, costing, and planning.

With regard to timing imperatives: a short period of reflection and structured alignment of objectives with people and capabilities will pay off in an accelerated transformation to follow. Modelling, costing, and scenario planning on the resulting benefits yields an action plan. Governance will address execution and accountability.

As stated in multiple references, enterprise architecture is the organizational structure for integrating and standardizing a company's operating model. It sets the tone for understanding external and internal business requirements and provides a method for visibility and control. It is not restricted to IT but is informed by, and contributes to, all aspects of operations, in terms of revenue and cost.

It helps to formulate an operating model in the context of a particular domain. Logistics requirements differ from those of running a municipality, although they may share technology and common skill sets.

Recent work with several companies, and exposure to individual methods of working, including those developed internally, yields at least four areas in capability management, demand and supply, and governance, namely: (i) business process, (ii) data, information, and security, (iii) application integration, and (iv) technology.

If working with an extended partner network, then we need to include the partner integration methods and processes (here we get into various aspects of trust in contract formulation, traceability, and alignment).

When informed by an accepted value proposition (business case) and well-defined drivers, these techniques represent powerful tools in resource allocation (who does what), organizational structure, and capability development, leading into the details of building a foundation for execution, with or without external partners.

Developing an external supplier engagement, and aligning internally between organizational silos, in the absence of drivers, business benefits, and an aligned architecture, may lead to all sorts of communication, funding, and technical issues, compounded by time imperatives and inertia.

Addressing these aspects up front engages people, identifies costs, provides valuable shared information, and drives the selection of an optimal solution.

Change, through developing the program for execution, becomes a natural outcome of shared objectives, with the value resulting from that change well understood.

[1] Enterprise Architecture as Strategy, Harvard Business School Press.

[2] Benefits Management: Delivering Value from IS & IT Investments, Wiley.

[3] TOGAF, an Open Group standard.

[4] Telekinetics: a value- and domain-based approach to service provider transformation.

Buy versus build

Buy versus build is a false dichotomy.

The best-of-breed or best-of-suite technology model, the idea of a commodity software product (appliance) at the center of operations, with an associated long-term, ROI-based pricing model and technology lock-in, does not always "stack" up.

There are insufficient grounds for excluding alternatives, given that a typical, increasingly relevant open source system has hundreds of programmers working on it, and is often better managed than an equivalent vendor-supplied product.

Programming is the commodity, not the box. Given software availability and good programmers, it is a matter of structuring and managing the product, and disciplining delivery from a software engineering perspective. The respective costs of a vendor offering ROI and an equivalent accelerated development set-up based on open source may be quantified and compared, although many of the medium-term gains are not immediately apparent.
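One hedged way to make that cost comparison concrete is a simple multi-year model. All figures below are hypothetical, chosen only to illustrate the shape of the calculation:

```python
# Hypothetical figures: cumulative cost of a licensed vendor product
# versus an in-house, open-source-based build over five years.
YEARS = 5

vendor = [400_000] + [250_000] * (YEARS - 1)    # licence, then annual support
inhouse = [600_000] + [150_000] * (YEARS - 1)   # initial build, then maintenance

cum_vendor = sum(vendor)
cum_inhouse = sum(inhouse)

# Break-even: first year in which the in-house cumulative cost
# drops below the vendor cumulative cost (None if it never does).
breakeven = next((y for y in range(1, YEARS + 1)
                  if sum(inhouse[:y]) < sum(vendor[:y])), None)
```

With these illustrative numbers the in-house route costs more up front but breaks even in year four; the point of the exercise is that the comparison can and should be made explicit before the decision is taken.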

Saving hundreds of thousands in the process, and gaining competitive advantage through on-demand innovation, is an objective worthy of pursuit.

Investing in skills and capability may be an outlier for operators, and business in general, but it is a powerful and efficient way of working if managed well. Not long ago it was business as usual for service providers, who brought R&D in-house while leveraging development partners. They retained an intimate knowledge of the business and worked closely with partners through the development cycle.

An equivalent approach, building on increasingly relevant community efforts (ECOMP, TIP) and creating one's own capability, makes sense, and is a step up from having to reinvent the wheel.

Why are major service providers such as AT&T, Deutsche Telekom, and Telefónica, as well as OTT players, engaging in these initiatives with a view to gaining control and flexibility over innovation (in creating revenue) and reducing their costs?

Each has mutually complementary reasons for doing so, highlighting the fact that a service provider needs more control over software, and an OTT provider such as Facebook more control over the network.

They believe that they can scale up quicker and get to market faster by participating in an ecosystem which differentiates in terms of the services being offered. Or are they simply keeping an eye on the competition?

In turn, vendors are changing their development model to take advantage of competition, partnerships, and an alternative ecosystem. Having, understandably, embraced the open source proposition, they are essentially assuming the role of integrator, reselling to operators, with value gained in customization, engineering, and management components.

Both approaches are relevant; it is a matter of assessing costs against technical capability and market drivers over an immediate to medium-term time-frame, and of having the will and interest to make it happen. Certain areas, such as system management support along the lines of MANO, stand out as opportunities directly competing with traditional vendor options.

However, when all is said and done, the issue of accountability has to be taken into account: who carries the responsibility for the decision-making process leading to procurement, and for the subsequent risk in in-house development?

Given the high percentage of failure and loss of value evidenced in multiple package deployments, it would appear that software is only one part of the risk factor.

For the moment, vendor partners guaranteeing the required level of support while retaining flexibility and capability over time appear to lead in preference. We shall see whether "à la carte" prevails over the "best of breed" fixed menu.

Docker – a “simple” way to develop, contain & port Unix multi-process systems

Docker has found its way into simplifying the Unix fork/exec daemon paradigm by creating contained processes, and restricting communication between them via the daemon running on the host kernel.

This is old technology, existing for over 40 years; it is as old as Unix itself (thanks, Kernighan and Ritchie; seriously, it is not a prank, in spite of rumour, although there are times when it appears to be, obscure as it is).

The Docker daemon packages fork/exec and shared memory, obviating the need for shared or local library creation, which, as anyone working with Unix will admit, has many variants and is complex to implement in parallel processing scenarios. One needs to be master of the "makefile" to get this right in the traditional sense, and honestly, who cares?
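For readers who have not had to write one in a while, the classic fork/exec pattern referred to above can be sketched in a few lines of Python. This is a minimal illustration of the underlying Unix system calls, not of how Docker implements containment:

```python
import os
import sys

def spawn(program, args):
    """Classic Unix fork/exec: the parent forks a child, the child
    replaces its image with the requested program, and the parent
    waits for the child's exit status."""
    pid = os.fork()
    if pid == 0:                               # child process
        os.execvp(program, [program] + args)   # never returns on success
    _, status = os.waitpid(pid, 0)             # parent blocks until child exits
    return os.WEXITSTATUS(status)

# Run a trivial command and collect its exit code.
code = spawn(sys.executable, ["-c", "print('hello from the child')"])
```

Every Unix daemon and shell has done some variant of this since the 1970s; Docker wraps the same primitives, adding namespaces and cgroups for isolation.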

Given the difficulties in sharing IPC space within Docker, wouldn't it be easier to just write some IPC code libraries?
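To show how small such a library can be, here is a minimal shared-memory exchange sketched with Python's standard `multiprocessing.shared_memory` module. Both sides run in one process here purely for illustration; in practice the "reader" would be a separate process attaching by segment name:

```python
from multiprocessing import shared_memory

# "Writer" side: create a shared-memory segment and fill it.
seg = shared_memory.SharedMemory(create=True, size=16)
seg.buf[:5] = b"hello"

# "Reader" side (normally another process): attach by name and read.
peer = shared_memory.SharedMemory(name=seg.name)
data = bytes(peer.buf[:5])

peer.close()
seg.close()
seg.unlink()   # release the segment once both sides are finished
```

The point is not that this replaces containerization, only that plain IPC between cooperating processes remains a few lines of code when the full isolation machinery is not needed.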

Business Mapping For Value – Part 1

This is the first of a three-part summary describing a method of understanding current business objectives and translating them into observable and potentially measurable benefits, such that these may be used to line up technology initiatives. It goes without saying that this structure may be sliced, diced, and expanded to suit a particular need and strategic objective, such as retrofitting a business, acquiring new technology, or evaluating a merger from a technology perspective. The approach draws on concepts from methodologies such as the Cranfield value-based IT approach and standard program management (PMI) methods.

Another key aspect is that measurement is welcome but incidental to the process: looking for numbers on observable current and anticipated futures is more important than finding them, as the work done in this regard helps to shape an understanding of the business and the corresponding technology. Don't get too hung up on that, although a numeric quantity is always better than speculation.

First, apply business strategy to IT investments using domain-specific references, such as standards, competitor profiles, and market drivers, to structure the case. In the case of communication service providers, the TMF (TM Forum) may be used as a reference in developing the business architecture, together with current threats, opportunities, and capabilities. Essentially, build an enterprise architecture.

Next, identify benefits based on the objectives documented and agreed with stakeholders (cannot stress the agreement part enough!) through the preceding architecture development process.

Now that some benefits are derived, we are ready to take the next step: mapping the benefits to the technology, both current and new or proposed.
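The mapping step can be sketched as a simple traceability structure. All objectives, benefits, and initiatives below are hypothetical, chosen only to show the shape of the method: each agreed benefit links an objective backward to its stakeholders and forward to candidate technology initiatives:

```python
# Hypothetical example: each agreed benefit ties a business objective
# to the technology initiatives expected to deliver it.
benefits = [
    {"objective": "reduce churn",
     "benefit": "higher first-call resolution rate",
     "measurable": True,
     "initiatives": ["CRM consolidation", "self-service portal"]},
    {"objective": "faster service launch",
     "benefit": "shorter provisioning lead time",
     "measurable": True,
     "initiatives": ["order-management automation"]},
]

def initiatives_for(objective):
    """Trace every technology initiative back to a stated objective."""
    return [i for b in benefits if b["objective"] == objective
            for i in b["initiatives"]]
```

An initiative that cannot be reached through this mapping has no agreed benefit behind it, which is precisely the kind of finding the exercise is meant to surface.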