IoE Design & Monetization Criteria

IoE is a broad field undergoing rapid change in technological capability, while business needs remain, understandably, largely undefined, with the exception of certain well-known domains such as industrial automation, utility management, smart cities and emergency applications.

As such, we present a number of business drivers and design options, understanding that certain factors are key to mid- to long-term deployments in terms of reusability and extension into existing and emerging technologies. We believe most, if not all, of these components have a value which may be interpreted in discrete terms, either as money or as an exchange medium. In closing, a section on revenue management identifies some of the opportunities and challenges faced in data ownership and utilization.

Firstly, the gateway, which communicates with IoT devices at the network edge, has an important role to play in providing a generic business capability for different types of application devices which need to work together in delivering a service; for example, a unified next-generation network consisting of traffic detection/sensor devices, VoIP endpoints and actuators.

This key component needs processing capability in order to perform a number of functions meeting multiple business and technical objectives, which implies a programmable operating system.

The gateway must be capable of processing diverse IoT protocols, such as Zigbee and others, many of which are non-standard; see the following link for details: http://telekinetics.eu/wp/protocols/

On the northbound interface, gateways should be designed to provide a single unified communication protocol at layer 3/4, which means they should effectively function as multi-protocol routers. This may be useful in establishing a common “fog” edge network allowing communication between gateways, as well as back to the home-base cloud.
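
To make this concrete, here is a minimal, hedged sketch (in Python) of what such normalization might look like on the gateway: payloads arriving over different southbound protocols are translated into one unified northbound message format. The protocol names, payload fields and schema below are illustrative assumptions, not a standard.

```python
# Minimal sketch (not a reference implementation): a gateway-side normalizer
# that maps frames from diverse southbound protocols into one unified
# northbound message format. Protocol names and field layouts are assumptions
# for illustration only.
import json
import time
from dataclasses import dataclass, asdict

@dataclass
class NorthboundMessage:
    device_id: str        # stable identifier assigned by the gateway
    protocol: str         # original southbound protocol, kept for traceability
    metric: str           # normalized measurement name, e.g. "temperature"
    value: float
    timestamp: float      # epoch seconds, stamped at the gateway

def normalize(protocol: str, raw: dict) -> NorthboundMessage:
    """Translate a protocol-specific payload into the unified schema."""
    if protocol == "zigbee":
        # hypothetical Zigbee application payload: {"ieee": ..., "temp_c": ...}
        return NorthboundMessage(raw["ieee"], "zigbee", "temperature",
                                 float(raw["temp_c"]), time.time())
    if protocol == "modbus":
        # hypothetical Modbus register read: {"unit": ..., "register": ..., "value": ...}
        return NorthboundMessage(f"modbus-{raw['unit']}", "modbus",
                                 f"register_{raw['register']}",
                                 float(raw["value"]), time.time())
    raise ValueError(f"unsupported southbound protocol: {protocol}")

# Example: two very different device payloads become one northbound format.
for proto, payload in [("zigbee", {"ieee": "00:1a:22", "temp_c": 21.4}),
                       ("modbus", {"unit": 7, "register": 40001, "value": 230})]:
    print(json.dumps(asdict(normalize(proto, payload))))
```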

The single northbound interface allows for considerable simplification of the supporting IoT-to-cloud control protocol and associated cloud stack. While this approach adds complexity to the gateway design, it simplifies rollout and implementation, and substantially reduces operating costs and maintenance.

The next key area of interest is the southbound control flow between the cloud and the IoT gateway. While a number of protocols have been developed for this purpose (e.g. MQTT), it may be possible to use standard SNMP/NETCONF, either standalone or overlaid with a messaging system.
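
As an illustration of the southbound control flow, the following sketch uses the paho-mqtt client (the 1.x callback API is assumed here) to subscribe to a hypothetical per-gateway control topic and acknowledge applied commands back to the cloud. The broker address, topic names and payload structure are assumptions for illustration, not a standard.

```python
# Minimal sketch of a southbound control channel using MQTT via the paho-mqtt
# client (assumed installed, 1.x callback API). Broker address, topic naming
# and payload structure are illustrative assumptions.
import json
import paho.mqtt.client as mqtt

BROKER = "cloud.example.net"                 # hypothetical cloud-side broker
CONTROL_TOPIC = "iot/gateway/gw-01/control"  # commands addressed to this gateway
STATE_TOPIC = "iot/gateway/gw-01/state"      # state reported back to the cloud

def on_connect(client, userdata, flags, rc):
    # Subscribe to control commands addressed to this gateway.
    client.subscribe(CONTROL_TOPIC)

def on_message(client, userdata, msg):
    command = json.loads(msg.payload)
    # e.g. {"device": "sensor-12", "action": "set_interval", "value": 30}
    print(f"applying control command: {command}")
    # After applying the command locally, report the resulting device state
    # back so the cloud-side topology model can stay synchronized.
    client.publish(STATE_TOPIC, json.dumps({"device": command.get("device"),
                                            "status": "applied"}))

client = mqtt.Client()
client.on_connect = on_connect
client.on_message = on_message
client.connect(BROKER, 1883)
client.loop_forever()
```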

Data and control flows are kept separate, with synchronization of device state maintained in the topology structure and persistent storage, expressed for example as a YANG model for NETCONF interaction or encapsulated within an API of choice.

More can be said on the cloud-based streaming services which will handle data flow aggregation, correlation, manipulation and subsequent analytics.

Sensor data, derived from multiple sources and lacking structure, needs to be aggregated, pattern-matched and associated with specific indicators for statistical analysis in terms of the application and for performance management, in near real time and subsequently off-line.

Note that where IoT functions must be controlled, data feedback needs to be immediate; the aggregation and analysis therefore need to happen in streaming mode.
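
A minimal sketch of what streaming-mode aggregation can look like: readings are folded into fixed time windows per device and metric so that indicators are available in near real time for control feedback, while raw events remain available for off-line analysis. The window size and key structure are illustrative assumptions.

```python
# Sketch of windowed streaming aggregation: events are reduced into per-window
# indicators suitable for threshold checks or control feedback.
WINDOW_SECONDS = 10

class WindowedAggregator:
    def __init__(self):
        # (device_id, metric, window_start) -> (count, total, minimum, maximum)
        self.windows = {}

    def ingest(self, device_id, metric, value, timestamp):
        window_start = int(timestamp) - int(timestamp) % WINDOW_SECONDS
        key = (device_id, metric, window_start)
        count, total, lo, hi = self.windows.get(key, (0, 0.0, value, value))
        self.windows[key] = (count + 1, total + value, min(lo, value), max(hi, value))

    def summary(self):
        # Emit one indicator per window for downstream analysis or control.
        for (device_id, metric, start), (count, total, lo, hi) in sorted(self.windows.items()):
            yield {"device": device_id, "metric": metric, "window": start,
                   "count": count, "avg": total / count, "min": lo, "max": hi}

agg = WindowedAggregator()
for t, v in [(100, 21.0), (103, 21.4), (111, 22.0)]:
    agg.ingest("sensor-12", "temperature", v, t)
for indicator in agg.summary():
    print(indicator)
```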

There are a number of tools for handling persistence and availability as well as stream processing. In this discussion we focus only on the key areas of interest in handling an IoT network in a similar fashion to a heterogeneous next-generation network, noting that capabilities will converge over the next couple of years, especially with the advent of software-defined networks, micro technology, and the increasing use of IP within the device ecosystem.

Every application network will have an initial and subsequent state and a specific configuration. This must be stored somewhere, and the topology information needs to persist, leading to two distinct requirements: storing and managing the static network layout, initial configuration and service representation; and handling the dynamic network layout, state and configuration in an intermediate, distributed data store which interacts with the network via control and data flows. This latter dynamic view is a mirror of the IoT network state at a particular point in time, and maintains state via updates to and from the control plane. We leave a note here to develop further detail on the logical definition of both the dynamic and the static representation of a network layout, and how each is physically expressed; it is a subject of its own.
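
The sketch below illustrates, under assumed field names, the separation just described: an immutable static layout capturing the intended design and initial configuration, and a dynamic mirror updated from control-plane events to reflect the network at a point in time.

```python
# Illustrative sketch only: static design vs. dynamic mirror of an IoT network.
# Field names are assumptions, not a standard data model.
from dataclasses import dataclass, field
from typing import Dict, Optional

@dataclass(frozen=True)
class StaticNode:
    node_id: str
    node_type: str                  # "gateway", "sensor", "actuator"
    parent: Optional[str]           # intended attachment point in the design
    initial_config: dict = field(default_factory=dict)

@dataclass
class DynamicNode:
    node_id: str
    operational_state: str = "unknown"   # "up", "down", "degraded"
    last_seen: float = 0.0
    runtime_config: dict = field(default_factory=dict)

class TopologyMirror:
    """Holds the dynamic view and applies control-plane updates to it."""
    def __init__(self, design: Dict[str, StaticNode]):
        self.design = design
        self.state = {n: DynamicNode(n) for n in design}

    def apply_update(self, node_id: str, operational_state: str, timestamp: float):
        node = self.state[node_id]
        node.operational_state = operational_state
        node.last_seen = timestamp

design = {"gw-01": StaticNode("gw-01", "gateway", None),
          "sensor-12": StaticNode("sensor-12", "sensor", "gw-01", {"interval_s": 30})}
mirror = TopologyMirror(design)
mirror.apply_update("sensor-12", "up", 1700000000.0)
print(mirror.state["sensor-12"])
```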

As for the virtualization/cloud aspect, it is noted that increases and decreases in IoT data volume are inevitable and may well be unpredictable, a business case which lends itself naturally to the solution provided by 5G MANO and NFV/SDN-like management and control. However, not all IoT cases are alike and there may well be no justification for a cloud-based operational framework in the first place; this is a matter for up-front evaluation.

Each IoT network may be segmented in a number of ways based on location, APN, service, priority, QoS and other criteria [1]. Topology therefore provides a natural guideline for setting up a virtual infrastructure and for initial resource allocation. Each IoT network segment may be allocated a VNF instance and associated contained service managed by the cloud controller.

The ability to scale the IoT up or down and control it may be based on (i) feedback received from the devices as part of the data collection and indicator processing cycle, (ii) network control updates as part of normal processing, and (iii) monitoring systems or probes.
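
The following sketch shows one possible way to combine those three feedback sources into a scale-up/scale-down decision for a segment's VNF instances; the thresholds and metric names are assumptions for illustration only.

```python
# Sketch of a simple scaling decision combining the three feedback sources
# named above: device indicators, control-plane updates, and external probes.
def scaling_decision(indicator_load, control_backlog, probe_latency_ms,
                     current_instances, max_instances=10, min_instances=1):
    """Return the desired number of VNF instances for one IoT segment."""
    scale_up = (indicator_load > 0.8          # device data volume pressure
                or control_backlog > 1000     # queued southbound commands
                or probe_latency_ms > 200)    # externally observed latency
    scale_down = (indicator_load < 0.3
                  and control_backlog < 100
                  and probe_latency_ms < 50)
    if scale_up:
        return min(current_instances + 1, max_instances)
    if scale_down:
        return max(current_instances - 1, min_instances)
    return current_instances

# Example: heavy device traffic on a segment triggers one additional instance.
print(scaling_decision(indicator_load=0.9, control_backlog=50,
                       probe_latency_ms=40, current_instances=2))  # -> 3
```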

In summary, the following design criteria are key to the successful implementation of an IoT network, all of which may be monetized in some form:

1. Network topology and state, as maintained statically and dynamically

2. Cloud infrastructure allocated to IoT instances and initial scaling of the gateway to cloud processing ratio in terms of resources, network planning, availability

3. Gateway design, build and selection based on (i) northbound interface standardization, (ii) multi-protocol conversion, (iii) filtering and aggregation functions, and (iv) edge network distribution

4. Control and data layer  implemented on the cloud, preferably with dynamic resource allocation per IoT instance grouping

5. Flexible and configurable streaming and data analysis, machine learning for sensor data optimization and feedback

6. Persistent data storage for analytics and operational / service management

7. Monitoring and management functions (partly achieved by cloud management), with an embedded FCAPS function within the gateway aggregating device management data, and via external active monitoring, i.e. probes and NMS.

Revenue Management


Finally, we will touch on one of the most important aspects relating to IoE, namely revenue management. This summary is intended to provoke some discussion and is by no means definitive; we will go into details in separate, focused articles.

Recently we addressed a number of business opportunities within several sectors, finding that there was little or no comprehension of how to monetize the service delivery and usage stack across multiple stakeholders, or even how to price our services within their delivery ecosystem.

This applies especially to constructing applications which operate over public and private clouds and multiple networks spanning edge, transport and access, where technical complexity across several layers of technology and multiple stakeholders makes the business proposition unclear.

While one can reasonably define platform and network utilization fees, it is more difficult to quantify, measure and price services offered as part of the IoE application, as a package, to the various parties and ultimately the customer, each with separate business and operating models.

Coming from a background of designing and applying revenue management across multiple heterogeneous ecosystems in the mobile and broadband B2B, retail and wholesale areas, we can identify the following key requirements towards making sense of an IoT revenue model.

The first thing to consider is that traditional centralized revenue management operational models (and systems) will not easily scale to IoE ecosystems. This is primarily because there are several legal and technical data ownership and trust elements to consider in distributing critical information across multiple estates owned and operated by separate parties.

The first step, based on the data definition design criteria discussed above, is to be able to structurally model the static infrastructure and its mediation. This is relatively easy to do given the extensive capability for service and resource modelling (see ITU/TMF SID, YANG models and generic data modelling for heterogeneous networks). Note that an IoE network is not static and may be constantly changing in terms of the interrelationship between devices, gateways and the transport network. This is not far different from the operation of a mobile network, but the IoE network has no standard unified operation, as 4G does for example; this is work in progress.

Once services and associated resources are well defined over an initial infrastructure configuration, it is possible to mediate and stream data over a number of the ecosystems, ensuring that the supporting methods (and protocols) expose usage. There are a number of constraints in doing so, namely that IoE data is in the main sessionless; however, volume and type of data are available, in primitive form.

When connecting into an IoE ecosystem as a user, one may use existing infrastructure or add infrastructure in the form of new devices. We can therefore classify (i) a provisioning request into the IoE revenue model and (ii) a utilization request as two distinct initiators of the order-to-cash process, potentially followed by usage measurement for particular devices, or associated processing as evidenced.
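
As a rough illustration of these two initiators, the sketch below classifies an incoming event as either a provisioning or a utilization request and opens a corresponding order-to-cash record; the event fields and rating basis are hypothetical assumptions, not an implemented revenue model.

```python
# Hedged sketch: two initiators of the order-to-cash process in an IoE
# revenue model. Event fields and rating conventions are hypothetical.
from enum import Enum

class Initiator(Enum):
    PROVISIONING = "provisioning"   # new device/gateway added to the ecosystem
    UTILIZATION = "utilization"     # existing infrastructure consumed

def classify(event: dict) -> Initiator:
    # Hypothetical convention: events carrying a device registration payload
    # are provisioning; everything else is treated as utilization.
    return Initiator.PROVISIONING if "new_device" in event else Initiator.UTILIZATION

def start_order_to_cash(event: dict) -> dict:
    initiator = classify(event)
    return {"initiator": initiator.value,
            "party": event.get("party"),
            "rating_basis": "one-off setup" if initiator is Initiator.PROVISIONING
                            else "usage volume/type"}

print(start_order_to_cash({"party": "utility-co", "new_device": "meter-991"}))
print(start_order_to_cash({"party": "household-42", "bytes": 20480}))
```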

The association of device usage with individual users will prove a challenge, as there is currently no explicit method of managing subscriber references as is the case in telephony.

As an operator of the devices and the cloud (public or private) back-end, there are various ways of monetizing bulk data aggregation and actuation; these functions can be modeled according to different application needs, depending on the value apportioned to the business nature of the application.

For example, a safety-critical IoT application may prevent losses to life and equipment by reducing risks through detection and notification. These risks may be offset by the introduction of a fire detection and notification system, such that insurance premiums may be reduced, and conversely increased in the event of non-compliance.

This premium-rating model has been widely used in actuarial methods for some time, as applied to industrial insurance and other areas. With the advent of big data, risk avoidance can be built into risk signalling, and premiums related to risk dispersion can be based on operating data profiled over time.

See this article from the Australian actuarial association for further reference:  https://actuaries.asn.au/Library/Opinion/2016/BIGDATAGPWEB.pdf
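
As a purely illustrative sketch of the idea (not an actuarial model), a base premium could be adjusted using operating data profiled over time, with a discount for demonstrated risk avoidance and a loading for recorded incidents; the weights and formula below are assumptions.

```python
# Hedged sketch of premium adjustment from operational risk signals reported
# by an IoT safety system. Weights and formula are illustrative assumptions.
def adjusted_premium(base_premium, detector_uptime, incidents_per_year,
                     max_discount=0.25, incident_loading=0.05):
    """Apply a discount for demonstrated risk avoidance and a loading for incidents."""
    discount = max_discount * detector_uptime        # uptime in [0, 1]
    loading = incident_loading * incidents_per_year  # surcharge per recorded incident
    return base_premium * (1.0 - discount + loading)

# Example: well-maintained detection lowers the premium; non-compliance raises it.
print(round(adjusted_premium(1000.0, detector_uptime=0.98, incidents_per_year=0), 2))  # 755.0
print(round(adjusted_premium(1000.0, detector_uptime=0.40, incidents_per_year=2), 2))  # 1000.0
```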

Naturally, access to operational information by third parties demands high levels of security and confidentiality, and is open to misuse and abuse.

In summary, we have the following business requirements so far:

  • Mediation of sessionless, heterogeneous information across partners
  • Interfaces to partner providers and operators
  • Understanding and offsetting legal and regulatory constraints in data ownership
  • Association of user profiles with usage for monetization and reward
  • Estimating and measuring Application value
  • Risk offset and reward monetization

The second main challenge is the ownership of information aggregated as part of the IoE processing, from which all sorts of meaningful information may be extracted, having value over and above that of the original data.

As an example, a typical household user profile for electricity and water usage, individually and over a municipal region, may be related to the operational and financial optimization of the utilities. In exchange for setting up and providing such a profile to an IoT provider, it may be possible to reduce the consumer's bill as a reward.

The data they generate may be anonymized and sold on for research and marketing purposes, or massaged into analytic information. But, increasingly, and we would assert rightfully so, ownership of that data-driven information is called into question.

Can the rightful owner be reimbursed for revealing their usage patterns, and by whom, given that the IoT is not yet based on a subscriber model?

Are intermediary brokers sufficiently secure to ensure that the information does not fall into wrong hands?

Is the utility  service provider entitled to generate revenue from the usage data collected for taxable resource utilization?

These are some of the questions to be answered over the next few years, noting that technology leads regulatory advances; the landscape is constantly evolving, making some issues redundant and bringing others into play.

Technologies such as blockchain are being touted as potential solutions to the trust aspect surrounding monetization, but it is as yet unclear whether there is a silver bullet to address these issues discretely.

In all probability there will be a mix of things which come into play all of which require fundamental understanding of privacy law, security, regulation and standardization both local and international, over and above technical and organizational constraints.


Head in the “cloud” feet on the ground

We recently reviewed a number of standards, products and technologies with the purpose of extending management systems into the cloud, based on the criteria of minimal development, simplicity, reuse, automation, and scalability.

There are several distinct business models in existence: the internet service provider view, otherwise known as Over-the-Top (OTT) by the communication service provider community (i.e. the telecom operator or CSP), and the emerging service ecosystem based on medium- to large-scale data center capability and associated services.

The IETF has provided the backbone of internet reference standards, and these have been applied with great ingenuity by the major internet service providers, which have focused on scaling their ecosystems according to large-scale and grid computing principles, merging into fully fledged SaaS and infrastructure “cloud” based utility services.

The communication service providers, realizing that revenue is systematically being eroded by the so-called OTTs, have taken steps to reduce costs and monetize their infrastructure while entering into partnerships within an admittedly limited regulatory framework. Certainly, the current network operating model is more about cost control than it is about monetization.

Given the costs of rolling out 5G capability in optical fiber and supporting back-end systems, it is difficult to see how operators would achieve ROI on these new infrastructure services over time while essentially functioning as a utility for third parties and cutting internal costs. The more they cut internally, the less capable they become in terms of innovation and service delivery.

So they have taken steps to address this through partnering with innovative vendors in cloudification, emulating internet service providers, associating with open-source communities such as the Linux Foundation, and partnering with an extended B2B and MVNO ecosystem.

However, they face the major challenge of the network.

Network computing (I/O control and management) is one of the most difficult areas for the cloud, but it is also one of the greatest opportunities for CSPs and other potential entrants. Internet providers do not “do” network, although they are certainly positioning for this requirement.

To this end, standards bodies such as the TeleManagement Forum, the Broadband Forum and ETSI have been working to develop network interoperability standards so that the rollout and operational management of services can be simplified, automated and reused in diverse ways.

However, this work is thwarted by several realities summarized as follows:

  • Time – things are moving fast, and playing catch-up places a CSP into reactive mode while technology is evolving. This means that solutions developed today may be thrown away tomorrow as obsolete, which places risk on vendor investment as well as on operator technology adoption
  • Existing operational infrastructure implies migration and dual operating modes, which are high-risk exercises impacting quality of service and organizational coherence
  • Regulatory constraints – these make investments uncertain, as it is unclear which services may be offered in the near future and thus which business priorities need to be developed
  • Complexity of emerging standards – the implementation of 5G with a focus on NFV powered by SDN capability is work in progress, while standards address only part of the solution (the network), leaving management and control components open to implementation via a complex overlay of off-the-shelf open-source components and development paradigms (OPNFV, OpenStack, OpenDaylight, ONOS and others)
  • The generalization of the management layer has been addressed to some extent by the emerging ECOMP and other relevant open-source initiatives, yet there is complexity in integrating this solution with the underlying alternatives for NFV and SDN, as well as with existing operational ecosystems

It is our opinion that progress can be made through iteratively and incrementally applying sound software engineering principles to generic open platforms, while focusing on point solutions to discrete business cases.

Leveraging one or more of the existing and emerging standards is a given, at least as a reference point in design, together with the use of components where these exist.

But ultimately, solutions will develop in keeping with market forces and capabilities, not by standards alone; technology is moving faster than adoption.

It is therefore important to understand the key design and implementation aspects which can be leveraged to drive change and readiness in CSPs' capability.

The following items are listed in no particular order; however, early analysis and business case definition will to a large extent guide the process:

  • Understanding the size and scale of the business case
  • Modelling the components throughout the system stack
  • Identifying key security and operational constraints
  • Applying Virtualization and open cloud adoption, where feasible
  • Development of selected “cloud” capabilities for storage and compute nodes – this is feasible given the availability of statistical multiplexing software, but scaling down is more challenging than scaling up
  • Development of a minimal container ecosystem on which VNFs can run
  • Decomposition of the network control and data transfer functions (see the modelling exercise)
  • Limited application of one or more of the existing NFV environments, integrated with a virtualized compute and storage stack implemented with some form of containerization
  • Integration and migration of existing infrastructure to a cloudified environment

The exercise is largely one of decomposition, standards-based design, open product selection, and pilot development and testing – there is no silver bullet at this stage.

The question is who is best positioned to execute this work, given that existing vendors are understandably unwilling to cannibalize their existing licensing model in favor of building open software-based ecosystems, while most operators are not set up for large-scale software development. They used to be, but that was a long time ago.

Equally, there is an open field for service provider entrants and MVNOs positioned to offer selected services based on the virtual/cloud paradigm, provided that they are ready and able to invest in the set-up of the required ecosystems.

Partnering with an integrator and/or system vendor who is willing and able to design and implement for “brownfield”, integrating existing critical operations, is key to this process and represents opportunities for both parties.

Establishing an internal minimal architecture function is a necessity, as is keeping track of internal  road map and prioritization functions, applying impact and risk analysis to the supply chain.

In particular, there are certain key differentiators which can be realized as revenue once high-quality-of-service, virtualized (not necessarily cloud-based) infrastructure is in place; these need to be qualified in terms of legal and regulatory constraints, privacy and security.

In the next chapter on this subject, we will continue with feedback on these potential revenue streams and the constraints thereof.


Lost in the (code) translation

A recent article in The Atlantic highlighted aspects of engineering tunnel vision resulting in a proliferation of code, tools, systems and standards, and failures in their implementation.

There are two distinct threads running through this. The first is the need to simplify and structure the number of systems, code bases, tools and standards applied interchangeably in solving identical problems, and to achieve greater reliability in doing so.

The second is the need to create user-responsive capability in developing software that solves real-world problems in industry, health and science, rather than simplifying petty inconveniences, as in “app” development for improved delivery services or social media.

We are reaching the stage where experimenting with and applying various tools, resulting in identical solutions to similar problems, provides indistinct value, unless tempered with an underlying laser-like focus on convergence to user needs.

How many coding systems, applications, ecosystems, and standards do we really need to make things happen?

It took over two million years to develop the hammer in its present form, but software acceleration is faster, dispersed and mechanistic in nature, and does not always reflect comparative progress in usability and function. We may die if avionics fail. We will not go hungry, or freeze, if we don't download the next app.

Coding as an intellectual exercise brings with it a narrow mechanistic focus, offsetting any cross-pollination benefits.

While keeping people in work and a vibrant consulting and training industry in shape, the focus on technology, as opposed to applying thought and rigor to practice, has led to a great deal of inertia in progressing the cause of useful coding, tools, systems, and usability.

This is evident for example in the “agile” development practices seen as panaceas addressing inertia in applying technology to business problems.

Instead of focusing on the drivers and high-level objectives, supported by structural, methodical automation of a core set of capabilities, an approach distancing requirements from technology is followed.

While it is certain that the agile manifesto encapsulates many of the usability objectives laid out, in practice the gap between an understanding of value and structure, and the resulting inadequate code, grows in proportion to the effort invested in responding to “pragmatic” deadlines.

The point is that less code is better code, and code shaped by converged design, comprehensive reasons for existence (i.e. requirements), and method in application is even better.

Fortunately there are a number of such  abstractions, and this brings the next point into view.

What constitutes a useful “converged” abstract model? It appears that, as with the development of the hammer, there are certain ways in computing of shaping the link between mathematics and usability, progressing in a direction that works.

One such example is the *NIX family of systems, initially developed in 1970, which after 40+ years remains relevant and is used in the majority of current communication and computing infrastructures.

Another, standards-focused, example is the work done on OSI, shaped by the ITU (founded in 1865), resulting in the OSI/Network Management Forum in 1988, which in turn morphed into the TM Forum set of standards widely used today by service providers as a reference for designing communication systems.

This work shows the need for abstractions in structurally solving a range of human problems in communicating information, involving extensive investment in engineering  and software development.

With the coming of the internet, and subsequently the web, software tools proliferated, automating standard communication and adding information processing functions (HTML, CSS, JavaScript). The IETF governed these new standards independently from practices inherited from the past.

Applications became less demanding to implement as code for responsive, interactive software, and less thought was given to generalizations and principles, as these were set in the standards and tools.

The problem arose when the proliferation of standards, tools and applications led, on the one hand, to a lack of structure and method in coding practice and, on the other, to a multiplying and divergent effect.

It takes more effort to think carefully about a set of problems and their structure, leading to a successful solution, than to implement a program providing immediate results. Additionally, when a proliferation of tools occurs, a lot of energy is invested in mastering these rather than in careful design and coding practice.

As developers and engineers face new needs and challenges, they constantly seek new ways of doing things; as competitors, they seek new ways of limiting their opposition by gazumping its technology, in short “cheaper, better, faster”, roughly aligned to cost, quality and efficiency. For the foreseeable future software will proliferate, so the best possible approach is to govern the process of making software rather than to limit a growing process.

Abstract architecture and governance methods, interactive modelling and visualization languages, and data representation can go a long way in focusing attention on business and human value before any code is produced, thus reducing software proliferation and increasing quality, while questioning the value of implementation objectives.

Architecture and abstraction methods exist and provide cost-saving, efficient solutions to real-life problems; they support and set the stage for users and stakeholders in understanding and setting objectives, and they guide subsequent coding practice.

While in tension with the need to deliver results pragmatically, they guide and shape better, lower-risk software systems that solve real business problems.


Change

Identifying the key incentives & drivers for change is one of the most important and challenging aspects of developing an organizational and technical  strategy.

After developing  a business case to determine drivers and potential benefits, the organization is in a position to focus on developing a change strategy.

The need for change must be clearly defined and accepted by key stakeholders, while the benefits to be derived from the enabling changes must be articulated, documented, and accepted.

Benefits may yield both tangible and non-tangible results, while a collaborative approach to defining these prepares and plans the implementation programs to follow.

The first step is to ensure communication and engagement between sponsors and stakeholders in producing the key value propositions, and prioritizing these. This approach applies just as much to purely organizational objectives as to technical and engineering goals.

A thorough understanding of, or research into, present and future conditions (states) will help in shaping target drivers and establishing an initial road map for implementation. The social, process and organizational components which lead to value from IT and shape a portfolio plan are discussed elsewhere; in this discussion the focus is on certain methods for developing a change program.

Other than poor execution, one of the biggest causes of program failure is a lack of real or perceived value and acceptance. Establishing strong buy-in up front and identifying value and reasons for concern should therefore be a priority. Often, the level of dissatisfaction among staff, loss of revenue and competitive threats provide sufficient grounds for action, but relying on experience which may be limited or fragmented does not produce repeatable outcomes.

The use of enterprise architecture in defining the details of an operating model for an organization will guide change in a formal, structured and visible manner.

While engaging personalities and complementary personal leadership attributes make a difference in developing and managing change processes, and they are needed, these alone are no substitute for clarity, through method, in the definition and acceptance of common goals, especially where organizational governance is concerned.

Progress needs to be measured against a well-defined baseline, relative to which controlled changes may be made with reduced impact on all concerned stakeholders.

Additionally, structure reduces the number of decisions which need to be made in coming to a mutually beneficial conclusion.

The term “architecture” is somewhat loaded, often taken as focused on technical aspects, but that is not the intended scope of the change initiative. Technical and engineering aspects will inform and detail a company's operating model, derived as a result of enterprise or point business drivers and organizational objectives.

Architectures divorced from operational reality, established in an ivory tower, are not useful in developing transformation objectives. While useful as a guideline to execution, they often take a tactical path to a solution without considering the cause of the problem itself, and the nature of the requirements.

Indeed, in certain cases a ready-made solution is offered before taking into account all the preceding factors, such as capability, existing state, and organizational inertia. As an example, certain fads drive initiatives: agile and DevOps are useful approaches but are often applied without adequate problem definition. Time is often an issue, and a quick-fix imperative overrides analysis, costing and planning.

With regard to timing imperatives, a short period of reflection and structured alignment of objectives with people and capabilities will pay off in an accelerated transformation to follow. Modelling, costing and scenario planning on the resulting benefits yields an action plan. Governance will address execution and accountability.

As stated in multiple references, enterprise architecture is the organizational structure for integrating and standardizing a company's operating model. It sets the tone for understanding external and internal business requirements and provides a method for visibility and control. It is not restricted to IT but is informed by and contributes to all aspects of operations, in terms of revenue and cost.

It helps to formulate an operating model in the context of a particular domain. Logistics requirements are different to those of running a municipality although they may share technology and common skill sets.

Recent work with several companies, and exposure to individual methods of working, including those developed internally, yields at least four areas of capability management, demand and supply, and governance, namely: (i) business process, (ii) data, information and security, (iii) application integration, and (iv) technology.

If working with an extended partner network, we also need to include the partner integration methods and processes (this gets into various aspects of trust in contract formulation, traceability, and alignment).

When informed by an accepted value proposition (business case) and well-defined drivers, these techniques represent powerful tools for resource allocation (who does what), organizational structure, and capability development, leading into the details of building a foundation for execution, with or without external partners.

Developing an external supplier engagement, and aligning internally between organizational silos, in the absence of drivers, business benefits and an aligned architecture, may lead to all sorts of communication, funding and technical issues, compounded by time imperatives and inertia.

Addressing these aspects up front engages people, identifies costs, provides valuable shared information and drives the selection of an optimal solution.

Change, through developing the program for execution, becomes a natural outcome of shared objectives, with the value resulting from that change well understood.

[1] Enterprise Architecture as Strategy – Harvard Business School Press

[2] Benefits Management: Delivering Value from IS & IT Investments – Wiley

[3] TOGAF – An Open Group Standard

[4] Telekinetics – A value & domain based approach to service provider transformation

Business Mapping For Value – Part 1

This is the first of a three-part summary describing a method of gaining an understanding of current business objectives and translating these into observable and potentially measurable benefits, such that they may be used to line up technology initiatives. It goes without saying that this is a structure which may be sliced and diced, and expanded, to suit a particular need and strategic objective, for example retrofitting a business, acquiring new technology, or evaluating a merger from a technology perspective. This approach has been drawn from concepts derived from methodologies such as Cranfield's value-based IT and standard program management (PMI) methods.

Another key aspect is that measurement is welcome but incidental to the process: looking for numbers on observable current and anticipated futures is more important than finding them, as the work done in this regard helps to shape an understanding of the business and the corresponding technology. Don't get too hung up on that, although a numeric quantity is always better than speculation.

Firstly, apply business strategy to IT investments using domain-specific references such as standards, competitor profiles and market drivers, to structure the case. In the case of communication service providers, the TMF (TeleManagement Forum) may be used as a reference in developing the business architecture, together with current threats, opportunities and capabilities. Essentially, build an enterprise architecture.

Identify benefits based on the objectives documented and agreed with stakeholders (we cannot stress the agreement part enough!) through the previous architecture development process.

Now that some benefits are derived, we are ready to go to the next step: mapping the benefits to the technology, both current and new or proposed.