Lost in the (code) translation

A recent article in The Atlantic highlighted aspects of engineering tunnel vision that result in a proliferation of code, tools, systems, and standards, and in failures in their implementation.

There are two distinct threads running through this. The first is the need to simplify and structure the number of systems, code bases, tools, and standards applied interchangeably to identical problems, and to achieve greater reliability in doing so.

The second is the need to create user-responsive capability in developing software that solves real-world problems in industry, health, and science, rather than smoothing petty inconveniences, as in "app" development for delivery services or social media.

We are reaching the stage where experimenting with and applying various tools to produce identical solutions to similar problems provides little distinct value, unless tempered by a laser-like focus on converging on user needs.

How many coding systems, applications, ecosystems, and standards do we really need to make things happen?

It took over 2 million years to develop the hammer into its present form. Software evolves faster, but in a dispersed and mechanistic way that does not always reflect comparable progress in usability and function. We may die if avionics fail; we will not go hungry, or freeze, if we fail to download the next app.

Coding as an intellectual exercise brings with it a narrow, mechanistic focus, offsetting any cross-pollination benefits.

While it keeps people in work and a vibrant consulting and training industry in shape, the focus on technology, as opposed to applying thought and rigor to practice, has created a great deal of inertia in advancing the cause of useful coding, tools, systems, and usability.

This is evident, for example, in "agile" development practices, which are seen as panaceas for the inertia in applying technology to business problems.

Instead of focusing on drivers and high-level objectives, supported by the structured, methodical automation of a core set of capabilities, an approach is followed that distances requirements from technology.

While the Agile Manifesto certainly encapsulates many of the usability objectives laid out here, in practice the gap between an understanding of value and structure and the resulting inadequate code grows in proportion to the effort invested in meeting "pragmatic" deadlines.

The point is that less code is better code, and code shaped by converged design, comprehensive reasons for existence (i.e. requirements), and method in application is better still.

Fortunately, a number of such abstractions exist, which brings the next point into view.

What constitutes a useful "converged" abstract model? It appears that, as with the development of the hammer, there are in computing certain ways of shaping the link between mathematics and usability that progress in a direction that works.

One such example is the *NIX family of systems, initially developed in 1970, which after more than 40 years remains relevant and underpins the majority of current communication and computing infrastructure.

Another, standards-focused, example is the work done on OSI, shaped by the ITU (founded in 1865), which led to the OSI/Network Management Forum in 1988 and in turn evolved into the TM Forum set of standards, widely used by service providers today as a reference for designing communication systems.

This work shows the need for abstractions in structurally solving a range of human problems in communicating information, backed by extensive investment in engineering and software development.

With the coming of the internet, and subsequently the web, software tools proliferated to automate standard communication and information-processing functions (HTML, CSS, JavaScript). Bodies such as the IETF and the W3C governed these new standards, independently of the practices inherited from the past.

Applications became less demanding to implement as code for responsive, interactive software, and less thought was given to generalizations and principles, as these were set in the standards and tools.

The problem arose when the proliferation of standards, tools, and applications led, on the one hand, to a lack of structure and method in coding practice and, on the other, to a multiplying and divergent effect.

It takes more effort to think carefully about a set of problems, and about structure, on the way to a successful solution than to implement a program that provides immediate results. Moreover, when tools proliferate, much energy is invested in mastering them rather than in careful design and coding practice.

As developers and engineers face new needs and challenges, they constantly seek new ways of doing things; as competitors, they seek new ways of limiting their opposition by gazumping its technology. In short, "cheaper, better, faster", roughly aligned to cost, quality, and efficiency. For the foreseeable future software will proliferate, so the best possible approach is to govern the process of making software rather than to limit a growing process.

Abstract architecture and governance methods, interactive modelling and visualization languages, and data representation can go a long way towards focusing attention on business and human value before any code is produced, thereby reducing software proliferation and increasing quality while questioning the value of implementation objectives.
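
As a minimal illustrative sketch (in Python, with every name below hypothetical), even a few executable lines of modelling can tie candidate systems back to the business driver they serve before any production code exists:

```python
from dataclasses import dataclass, field

@dataclass
class Capability:
    """A business capability, modelled before any tooling is chosen."""
    name: str
    driver: str  # the business driver this capability serves
    candidates: list[str] = field(default_factory=list)  # candidate systems

# Model the intent first; implementation choices are attached, and questioned, later.
billing = Capability("Customer billing", "Reduce revenue leakage")
billing.candidates.append("open-source rating engine")
billing.candidates.append("vendor billing suite")

for cap in (billing,):
    print(f"{cap.name} <- {cap.driver}: {cap.candidates}")
```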

Architecture and abstraction methods exist and provide cost-saving, efficient solutions to real-life problems. They set the stage for users and stakeholders to understand and set objectives, and they guide subsequent coding practice.

While in tension with the need to deliver results pragmatically, they guide and shape better, lower-risk software systems that solve real business problems.


Change

Identifying the key incentives and drivers for change is one of the most important and challenging aspects of developing an organizational and technical strategy.

After developing a business case to determine drivers and potential benefits, the organization is in a position to focus on developing a change strategy.

The need for change must be clearly defined and accepted by key stakeholders, and the benefits to be derived from the enabling changes articulated, documented, and accepted.

Benefits may be both tangible and intangible, and a collaborative approach to defining them prepares and plans the implementation programs to follow.

The first step is to ensure communication and engagement between sponsors and stakeholders in producing and prioritizing the key value propositions. This applies just as much to purely organizational objectives as to technical and engineering goals.

A thorough understanding of, or research into, present and future conditions (states) will help shape target drivers and establish an initial road map for implementation. The social, process, and organizational components through which IT value shapes a portfolio plan are discussed elsewhere; in this discussion the focus is on certain methods for developing a change program.

Apart from poor execution, one of the biggest causes of program failure is a lack of real or perceived value and acceptance. Establishing strong buy-in up front, and identifying value and reasons for concern, should therefore be a priority. Often the level of dissatisfaction among staff, loss of revenue, and competitive threats provide sufficient grounds for action, but relying on experience that may be limited or fragmented does not produce repeatable outcomes.

The use of enterprise architecture in defining the details of an operating model for an organization will guide change in a formal, structured, and visible manner.

While engaging personalities and complementary leadership attributes make a difference in developing and managing change processes, and are needed, they are no substitute for the clarity that method brings to the definition and acceptance of common goals, especially where organizational governance is concerned.

Progress needs to be measured against a well-defined baseline, relative to which controlled changes may be made with reduced impact on all concerned stakeholders.
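
A minimal sketch of the idea (the metric names and figures below are invented for illustration) is to record each controlled change as a delta against the agreed baseline:

```python
# Hypothetical baseline metrics agreed with stakeholders before changes begin.
baseline = {"open_incidents": 120, "order_cycle_time_days": 14}

# Measurements taken after a controlled change has been applied.
current = {"open_incidents": 95, "order_cycle_time_days": 11}

# Report movement relative to the baseline, not in absolute terms.
for metric, base in baseline.items():
    delta = current[metric] - base
    print(f"{metric}: baseline={base}, current={current[metric]}, delta={delta:+}")
```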

Additionally, structure reduces the number of decisions which need to be made in coming to a mutually beneficial conclusion.

The term "architecture" is somewhat loaded, suggesting a focus on technical aspects, but this is not the intended scope of the change initiative. Technical and engineering aspects inform and detail a company's operating model, which is derived from enterprise or point business drivers and organizational objectives.

Architectures divorced from operational reality, established in an ivory tower, are not useful in developing transformation objectives. While useful as a guideline for execution, they often take a tactical path to a solution without considering the cause of the problem itself and the nature of the requirements.

Indeed, in certain cases a ready-made solution is offered before taking into account all the preceding factors, such as capability, existing state, and organizational inertia. For example, certain fads drive initiatives: agile and DevOps are useful approaches often applied without adequate problem definition. Time is often an issue, and a quick-fix imperative overrides analysis, costing, and planning.

With regard to timing imperatives, a short period of reflection and structured alignment of objectives with people and capabilities will pay off in the accelerated transformation that follows. Modelling, costing, and scenario planning around the resulting benefits yield an action plan. Governance then addresses execution and accountability.

As stated in multiple references, enterprise architecture is the organizing structure for integrating and standardizing a company's operating model. It sets the tone for understanding external and internal business requirements and provides a method for visibility and control. It is not restricted to IT; it is informed by, and contributes to, all aspects of operations, in terms of both revenue and cost.

It helps formulate an operating model in the context of a particular domain: logistics requirements differ from those of running a municipality, although the two may share technology and common skill sets.

Recent work with several companies, and exposure to individual methods of working, including those developed internally, yields at least four areas (five with an extended partner network) of capability management, demand and supply, and governance, namely: (i) business process; (ii) data, information, and security; (iii) application integration; and (iv) technology.

If working with an extended partner network, we also need to include (v) partner integration methods and processes, where various aspects of trust arise in contract formulation, traceability, and alignment.
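
Expressed as a simple capability map (the entries below are purely illustrative), these areas can then be reviewed for gaps and ownership before execution begins:

```python
# Illustrative capability map for the areas named above; real content would
# come from the business case and the agreed drivers.
architecture_areas = {
    "business process":             ["order-to-cash", "service assurance"],
    "data, information & security": ["customer master data", "access control"],
    "application integration":      ["billing <-> CRM interface"],
    "technology":                   ["network and cloud platforms"],
    "partner integration":          ["contract traceability", "alignment reviews"],
}

for area, items in architecture_areas.items():
    print(f"{area}: {', '.join(items)}")
```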

When informed by an accepted value proposition (the business case) and well-defined drivers, these techniques are powerful tools for resource allocation (who does what), organizational structure, and capability development, leading into the details of building a foundation for execution, with or without external partners.

Developing an external supplier engagement, and aligning internally across organizational silos, in the absence of drivers, business benefits, and an aligned architecture may lead to all sorts of communication, funding, and technical issues, compounded by time imperatives and inertia.

Addressing these aspects up front engages people, identifies costs, provides valuable shared information and drives the selection of an optimal solution.

Change, through developing the program for execution, becomes a natural outcome of shared objectives, with the value resulting from that change well understood.

[1] Enterprise Architecture as Strategy, Harvard Business School Press.

[2] Benefits Management: Delivering Value from IS & IT Investments, Wiley.

[3] TOGAF, an Open Group standard.

[4] Telekinetics: a value- and domain-based approach to service provider transformation.

Buy versus build

Buy versus build is a false dichotomy.

A best-of-breed or best-of-suite technology model, the idea of a commodity software product (appliance) at the center of operations, with an associated long-term, ROI-based pricing model and technology lock-in, does not always "stack up".

There are insufficient grounds for excluding alternatives, given that a typical, increasingly relevant open-source system has hundreds of programmers working on it and is often better managed than an equivalent vendor-supplied product.

Programming is the commodity, not the box; given software availability and good programmers, it is a matter of structuring and managing the product and disciplining delivery from a software engineering perspective. The respective costs of a vendor offering with an ROI model and an equivalent accelerated development set-up based on open source can be quantified and compared, although many of the medium-term gains are not immediately apparent.
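
As a back-of-the-envelope sketch, with every figure below invented purely for illustration, such a comparison might look like this:

```python
# All figures are hypothetical placeholders; substitute real quotes and salaries.
YEARS = 5

vendor_licence_per_year = 400_000  # assumed annual licence and support fee
vendor_integration_once = 250_000  # assumed one-off integration cost

oss_engineers = 3
cost_per_engineer = 120_000        # assumed fully loaded annual cost
oss_tooling_per_year = 30_000      # assumed infrastructure and tooling

vendor_total = vendor_integration_once + YEARS * vendor_licence_per_year
oss_total = YEARS * (oss_engineers * cost_per_engineer + oss_tooling_per_year)

print(f"Vendor option over {YEARS} years:      {vendor_total:,}")
print(f"Open-source option over {YEARS} years: {oss_total:,}")
```

On figures like these the open-source route comes out a few hundred thousand ahead, though such a simple model deliberately ignores harder-to-quantify items such as retained skills and flexibility.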

Saving hundreds of thousands in the process, and gaining competitive advantage through on-demand innovation, is an objective worth pursuing.

Investing in skills and capability may be an outlier for operators and business in general, but it is a powerful and efficient way of working if managed well. Not long ago it was business as usual for service providers, who brought R&D in-house while leveraging development partners. They retained an intimate knowledge of the business and worked closely with partners through the development cycle.

An equivalent approach of building on increasingly relevant community efforts (ECOMP, TIP) and creating one's own capability makes sense, and is a step up from having to reinvent the wheel.

Why are major service providers such as AT&T, Deutsche Telekom, and Telefónica, and OTT players, engaging in these initiatives with a view to gaining control and flexibility over innovation (in creating revenue) and to reducing their costs?

https://www.onap.org/

Each has complementary reasons for doing so, highlighting the fact that a service provider needs more control over software, and an OTT provider such as Facebook more control over the network.

They believe they can scale up more quickly and get to market faster by participating in an ecosystem, differentiating in terms of the services offered. Or are they simply keeping an eye on the competition?

Vendors, in turn, are changing their development model, taking advantage of competition, partnerships, and an alternative ecosystem. Having understandably embraced the open-source proposition, they are essentially assuming the role of integrator, reselling to operators, with value gained in customization, engineering, and management components.

Both approaches are relevant; it is a matter of assessing costs against technical capability and market drivers over an immediate-to-medium-term time frame, and of having the will and interest to make it happen. Certain areas, such as system management support along the lines of MANO, stand out as opportunities directly competing with traditional vendor options.

However, when all is said and done, the issue of accountability has to be taken into account: who carries the responsibility for the decision-making process leading to procurement, and for the subsequent risk in in-house development?

Given the high rate of failure and loss of value evidenced in multiple package deployments, it would appear that software is only one part of the risk.

For the moment, vendor partners guaranteeing the required level of support, while retaining flexibility and capability over time, appear to lead in preference. We shall see whether "à la carte" prevails over the "best of breed" fixed menu.