IoE Design & Monetization Criteria

IoE is a broad field undergoing rapid technological change, while business needs remain, understandably, largely undefined, with the exception of certain well-known domains such as industrial automation, utility management, smart cities and emergency applications, to name a few.

As such, we present a number of business drivers and design options, understanding that certain factors are key to mid-to-long-term deployments in terms of reusability and extension into existing and emerging technologies. We believe most if not all of these components have a value which may be interpreted in discrete terms, either as money or as an exchange medium. In closing, a section on revenue management identifies some of the opportunities and challenges faced in data ownership and utilization.

First, the gateway, which communicates with IoT devices at the network edge, has an important role to play in providing a generic business capability for the different types of application devices which need to work together in performing a service: for example, a unified next-generation network consisting of traffic detection/sensor devices, VoIP endpoints and actuators.

This key component needs processing capability in order to perform a number of functions in meeting multiple business and technical objectives, which implies a programmable operating system.

The gateway must be able to process diverse IoT protocols, such as Zigbee and others, many of which are non-standard; see http://telekinetics.eu/wp/protocols/ for details.

On the northbound interface, gateways should be designed to provide a single unified communication protocol at layer 3/4, which means that they effectively function as multi-protocol routers. This may be useful in establishing a common "fog" edge network, allowing communication between gateways as well as back to the home cloud.

The single northbound interface allows for considerable simplification of the supporting IoT-to-cloud control protocol and the associated cloud stack. While this approach adds complexity to the gateway design, it simplifies rollout and implementation and substantially reduces operating and maintenance costs.
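
As a rough illustration of the two points above, the sketch below shows per-protocol adapters normalizing raw device payloads into a single northbound message format. The adapter logic, field names and framing are our own assumptions for illustration, not any standard encoding.

    # Minimal sketch: per-protocol adapters feed one unified northbound
    # envelope. Everything here is illustrative, not a real protocol stack.
    import json
    import time

    def zigbee_adapter(raw: bytes) -> dict:
        # Hypothetical decoder; a real Zigbee frame needs a full stack.
        return {"device_id": raw[:4].hex(),
                "value": int.from_bytes(raw[4:6], "big")}

    ADAPTERS = {"zigbee": zigbee_adapter}  # one adapter per southbound protocol

    def northbound_message(protocol: str, raw: bytes) -> str:
        reading = ADAPTERS[protocol](raw)
        # Single unified envelope carried over IP on the northbound side.
        return json.dumps({"ts": time.time(), "proto": protocol, **reading})

    print(northbound_message("zigbee", bytes.fromhex("0a0b0c0d00ff")))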

The next key area of interest is the southbound control flow between the cloud and the IoT gateway. While a number of protocols have been developed for this purpose (e.g. MQTT), it may also be possible to use standard SNMP/NETCONF, either standalone or overlaid with a messaging system.
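
For the MQTT option mentioned above, a minimal southbound control subscriber on the gateway might look as follows. This assumes the paho-mqtt Python package (1.x client API); the broker address and topic layout are placeholders.

    # Sketch of a southbound control channel on the gateway using MQTT.
    import paho.mqtt.client as mqtt  # paho-mqtt 1.x client API assumed

    def on_command(client, userdata, msg):
        # e.g. topic "gateway/gw-01/actuate", payload '{"relay": "on"}'
        print(f"control message on {msg.topic}: {msg.payload.decode()}")

    client = mqtt.Client()
    client.on_message = on_command
    client.connect("cloud.example.net", 1883, 60)  # hypothetical broker
    client.subscribe("gateway/gw-01/#")            # hypothetical topic tree
    client.loop_forever()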

Data and control flows are kept separate, with synchronization of device state maintained in the topology structure and persistent storage, expressed for example as a YANG model for NETCONF interaction or encapsulated within an API of choice.

More can be said about the cloud-based streaming services which handle data flow aggregation, correlation and manipulation, and subsequent analytics.

Sensor data, derived from multiple sources and lacking structure, needs to be aggregated, pattern-matched and associated with specific indicators, both for application-level statistical analysis and for performance management, in near real time and subsequently offline.

Note that where IoT functions must be controlled, data feedback needs to be immediate; the aggregation and analysis therefore need to happen in streaming mode.
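
A toy example of this streaming requirement: readings are aggregated over a sliding window and feedback is produced immediately when an indicator crosses a threshold. The window size, threshold and action are invented for illustration.

    # Streaming-mode aggregation with immediate feedback to the device.
    from collections import deque

    WINDOW, THRESHOLD = 10, 75.0   # invented parameters
    window = deque(maxlen=WINDOW)

    def ingest(reading: float):
        window.append(reading)
        avg = sum(window) / len(window)                # aggregation
        if avg > THRESHOLD:                            # indicator match
            return {"action": "throttle", "avg": avg}  # immediate feedback
        return None

    for r in [70, 72, 80, 85, 90]:                     # simulated sensor feed
        cmd = ingest(r)
        if cmd:
            print("feedback to device:", cmd)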

There are a number of tools for handling persistence and availability as well as stream processing. In this discussion we focus only on the key areas of interest in handling an IoT network in a similar fashion to a heterogeneous next-generation network, noting that capabilities will converge over the next couple of years, especially with the advent of software-defined networks, micro technology, and the increasing use of IP within the device ecosystem.

Every application network will have an initial and subsequent state and a specific configuration. This must be stored somewhere, and topology information needs to persist, leading to two distinct requirements: storing and managing the static network layout (initial configuration and service representation), and the dynamic network layout (state and configuration), the latter handled in an intermediate, distributed data store which interacts with the network via control and data flows. This dynamic layout is a mirror of the IoT network state at a particular point in time, and maintains state via updates to and from the control plane. A note for later: the logical definition of the dynamic versus static representation of a network layout, and how each is physically expressed, deserves further detail and is a subject of its own.
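
To make the static/dynamic split concrete, here is a minimal sketch: a static layout holding initial configuration and service representation, and a dynamic mirror updated from the control plane. The structures and the persistence choice are illustrative assumptions.

    # Static layout (initial config) plus a dynamic mirror of live state.
    STATIC_TOPOLOGY = {
        "gw-01": {"segment": "metering", "devices": ["dev-1", "dev-2"]},
    }

    dynamic_state = {}  # mirrors network state at a point in time

    def on_control_update(device_id: str, state: dict):
        # Control-plane update: merge into the dynamic mirror; the
        # persistence backend (DB, YANG datastore) is a design choice.
        dynamic_state.setdefault(device_id, {}).update(state)

    on_control_update("dev-1", {"status": "up", "fw": "1.2"})
    print(dynamic_state)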

As for the virtualization/cloud aspect, increases and decreases in IoT data volume are inevitable and may well be unpredictable, a business case which lends itself naturally to the solutions provided by 5G MANO and NFV/SDN-style management and control. However, not all IoT cases are alike, and there may well be no justification for a cloud-based operational framework in the first place; this is a matter for up-front evaluation.

Each IoT network may be segmented in a number of ways based on location, APN, service, priority, QoS and other criteria [1]. Topology therefore provides a natural guideline for setting up a virtual infrastructure and for initial resource allocation. Each IoT network segment may be allocated a VNF instance and an associated contained service managed by the cloud controller.

The ability to scale the IoT up or down and control it may be based on (i) feedback received from the devices as part of the data collection and indicator processing cycle, (ii) network control updates generated as part of normal processing, and (iii) monitoring systems or probes.
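
A simplistic sketch of such a scaling decision, combining the three feedback sources per segment: the thresholds and the normalized load indicators are assumptions, and a real controller would drive the cloud MANO stack rather than return a string.

    # Per-segment scale decision from the three feedback sources above.
    def scale_decision(indicators: dict) -> str:
        load = max(indicators.get("device_feedback", 0.0),   # (i)
                   indicators.get("control_updates", 0.0),   # (ii)
                   indicators.get("probe_load", 0.0))        # (iii)
        if load > 0.8:
            return "scale_up"    # request capacity from the cloud controller
        if load < 0.2:
            return "scale_down"  # release VNF resources for this segment
        return "hold"

    print(scale_decision({"device_feedback": 0.9, "probe_load": 0.4}))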

In summary, the following design criteria are key to the successful implementation of an IoT network, all of which may be monetized in some form:

1. Network topology and state, as maintained statically and dynamically

2. Cloud infrastructure allocated to IoT instances, and initial scaling of the gateway-to-cloud processing ratio in terms of resources, network planning and availability

3. Gateway design, build and selection based on (i) northbound interface standardization, (ii) multi-protocol conversion, (iii) filtering and aggregation functions, and (iv) edge network distribution

4. Control and data layers implemented on the cloud, preferably with dynamic resource allocation per IoT instance grouping

5. Flexible and configurable streaming and data analysis, machine learning for sensor data optimization and feedback

6. Persistent data storage for analytics and operational / service management

7. Monitoring and management functions (partly achieved by cloud management), with an embedded FCAPS function within the gateway aggregating device management data, and via external active monitoring, i.e. probes and an NMS (a sketch follows this list)
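
As a sketch of the embedded FCAPS function in item 7, the gateway could aggregate per-device fault and performance counters before exposing them northbound to an NMS or probe; the categories and names below are illustrative only.

    # Gateway-side aggregation of device management events (FCAPS sketch).
    from collections import Counter

    faults = Counter()  # fault counts per device
    perf = Counter()    # performance report counts per device

    def record_event(device_id: str, kind: str):
        # Aggregate raw device events inside the gateway before they
        # are exposed northbound to the NMS or external probes.
        (faults if kind == "fault" else perf)[device_id] += 1

    record_event("dev-1", "fault")
    record_event("dev-2", "report")
    print({"faults": dict(faults), "reports": dict(perf)})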

Revenue Management

Finally, we will touch on one of the most important aspects relating to IoE, namely revenue management. This summary is intended to provoke some discussion and is by no means definitive; we will go into detail in separate, focused articles.

Recently we addressed a number of business opportunities within several sectors, finding that there was little or no comprehension of how to monetize the service delivery and usage stack across multiple stakeholders, or even of how to price our services within their delivery ecosystem.

This applies especially to constructing applications which operate across public and private clouds and multiple networks spanning edge, transport and access, where technical complexity across several layers of technology and multiple stakeholders makes the business proposition unclear.

While one can reasonably define platform and network utilization fees, it is more difficult to quantify, measure and price the services offered as part of the IoE application, as a package, to the various parties and ultimately the customer, each with separate business and operating models.

Coming from a background of designing and applying revenue management across multiple heterogeneous ecosystems in the mobile and broadband B2B, retail and wholesale areas, we can identify the following key requirements for making sense of an IoT revenue model.

The first thing to consider is that traditional centralized revenue management operational models (and systems) will not easily scale to IoE ecosystems. This is primarily because there are several legal and technical data ownership and trust elements to consider when distributing critical information across multiple estates owned and operated by separate parties.

The first step, based on the data definition design criteria discussed above, is to structurally model the static infrastructure and its mediation. This is relatively easy to do given the extensive capability available for service and resource modelling (see the ITU/TMF SID, YANG models and generic data modelling for heterogeneous networks). Note that an IoE network is not static and may be constantly changing in terms of the interrelationships between devices, gateways and the transport network. This is not far removed from the operation of a mobile network, but the IoE network has no standard unified operation as 4G does, for example; this is work in progress.

Once services and associated resources are well defined over an initial infrastructure configuration, it is possible to mediate and stream data across a number of the ecosystems, ensuring that the supporting methods (and protocols) expose usage. There are constraints in doing so, notably the fact that IoE data is in the main sessionless; however, the volume and type of data are available in primitive form.

When connecting into an IoE ecosystem, a user may use existing infrastructure or add infrastructure in the form of new devices. We can therefore classify (i) a provisioning request into the IoE revenue model and (ii) a utilization request as two distinct initiators of the order-to-cash process, potentially followed by usage measurement for particular devices, or associated processing as evidenced.
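
The two initiators can be sketched as rateable events feeding the order-to-cash process; since IoE traffic is largely sessionless, usage is represented here as volume/type counters rather than sessions. All field names are invented for illustration.

    # Two order-to-cash initiators expressed as rateable revenue events.
    from dataclasses import dataclass

    @dataclass
    class RevenueEvent:
        initiator: str     # "provisioning" or "utilization"
        party: str         # which stakeholder to bill or reward
        device_id: str
        bytes_moved: int = 0   # volume counter, not a session record
        data_type: str = ""

    events = [
        RevenueEvent("provisioning", "acme-utility", "meter-778"),
        RevenueEvent("utilization", "acme-utility", "meter-778",
                     bytes_moved=4096, data_type="telemetry"),
    ]
    for e in events:
        print(f"{e.initiator}: rate {e.bytes_moved}B of {e.data_type or 'n/a'}")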

The association of device usage with individual users will prove a challenge, as there is currently no explicit method of managing subscriber references as there is in telephony.

For an operator of the devices and the cloud (public or private) back end, there are various ways of monetizing bulk data aggregation and actuation; these functions can be modeled according to different application needs and the value apportioned to the business nature of the application.

For example, a safety-critical IoT application may prevent loss of life and equipment by reducing detection and notification risks. Such risks may be offset by the introduction of a fire detection and notification system, such that insurance premiums may be reduced, or conversely increased in the event of non-compliance.

This premium rating model has been widely used in actuarial practice for some time, as applied to industrial insurance and other areas. With the advent of big data, risk avoidance can be built into risk signalling, and premiums can be adjusted based on operating data profiled over time.
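
As a back-of-envelope illustration of this rating idea (not an actuarial model), a risk score derived from operating data profiled over time could adjust a base premium within a band; the numbers and weighting are entirely hypothetical.

    # Hypothetical premium adjustment driven by an operational risk score.
    def adjusted_premium(base: float, risk_score: float) -> float:
        # risk_score in [0, 1]: 0 = full detection/notification coverage,
        # 1 = no mitigation in place. Band of +/-20% around base, invented.
        return base * (0.8 + 0.4 * risk_score)

    print(adjusted_premium(1000.0, 0.1))  # well-instrumented site: 840.0
    print(adjusted_premium(1000.0, 0.9))  # non-compliant site: 1160.0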

See this article from the Australian actuarial association for further reference: https://actuaries.asn.au/Library/Opinion/2016/BIGDATAGPWEB.pdf

Naturally, access to operational information by third parties implies high levels of security and confidentiality, and is open to misuse and abuse.

In summary, we have the following business requirements so far:

  • Mediation of sessionless, heterogeneous information across partners
  • Interfaces to partner providers and operators
  • Understanding and offsetting legal and regulatory constraints in data ownership
  • Association of user profiles with usage for monetization and reward
  • Estimating and measuring application value
  • Risk offset and reward monetization

The second main challenge is the ownership of information aggregated as part of IoE processing, from which all sorts of meaningful information may be extracted, having value over and above that of the original data.

As an example, a typical household user profile for electricity and water usage, individually and over a municipal region, may be related to the operational and financial optimization of the utilities. In exchange for setting up and providing such a profile to an IoT provider, a consumer's bill might be reduced as a reward.

The data they generate may be anonymized and sold on for research and marketing purposes, or massaged into analytic information. But increasingly, and we would assert rightfully so, ownership of that data-driven information is being called into question.

Can the rightful owner be reimbursed for revealing their usage patterns, and by whom, given that the IoT is not yet based on a subscriber model?

Are intermediary brokers sufficiently secure to ensure that the information does not fall into wrong hands?

Is the utility service provider entitled to generate revenue from the usage data collected for taxable resource utilization?

These are some of the questions to be answered over the next few years. Noting that technology leads regulatory advances, the landscape is constantly evolving, making some issues redundant and bringing others into play.

Technologies such as blockchain are being touted as potential solutions to the trust aspect surrounding monetization, but it is as yet unclear whether there is a silver bullet to address these issues discretely.

In all probability a mix of approaches will come into play, all of which require a fundamental understanding of privacy law, security, regulation and standardization, both local and international, over and above technical and organizational constraints.

Head in the "cloud", feet on the ground

We have recently been reviewing a number of standards, products and technologies with the purpose of extending management systems into the cloud, based on the criteria of minimal development, simplicity, reuse, automation and scalability.

There are several distinct business models in existence: the internet service provider view, otherwise known as Over-the-Top (OTT) by the communication service provider (CSP) community, i.e. the telecom operators; and the emerging service ecosystem based on medium-to-large-scale data center capability and associated services.

The IETF has provided the backbone of internet reference standards, and these have been applied with great ingenuity by the major internet service providers, who have focused on scaling their ecosystems according to large-scale and grid computing principles, merging into fully fledged SaaS and infrastructure "cloud" utility services.

The communication service providers, realizing that revenue is systematically being eroded by the so-called OTTs, have taken steps to reduce costs and monetize their infrastructure while entering into partnerships within an admittedly limited regulatory framework. Certainly, the current network operating model is more about cost control than it is about monetization.

Given the costs of rolling out 5G capability in optical fiber and supporting back-end systems, it is difficult to see how operators would achieve ROI on these new infrastructure services over time while essentially functioning as a utility for third parties and cutting internal costs. The more they cut internally, the less capable they become in terms of innovation and service delivery.

So they have taken steps to address this by partnering with innovative vendors in cloudification, emulating internet service providers, associating with open-source communities such as the Linux Foundation, and partnering with an extended B2B and MVNO ecosystem.

However, they face the major challenge of the network.

Network computing (I/O control and management) is one of the most difficult areas for the cloud, but it is also one of the greatest opportunities for CSPs and other potential entrants. Internet providers do not "do" network, although they are certainly positioning for this requirement.

To this end, standards bodies such as the TeleManagement Forum, the Broadband Forum and ETSI have been working to develop network interoperability standards so that the rollout and operational management of services can be simplified, automated and reused in diverse ways.

However, this work is thwarted by several realities summarized as follows:

  • Time – things are moving fast, and playing catch-up places a CSP in reactive mode while technology is evolving. Solutions developed today may be thrown away tomorrow as obsolete, which places risk on vendor investment as well as on operator technology adoption
  • Existing operational infrastructure implies migration and dual operating modes, which are high-risk exercises impacting quality of service and organizational coherence
  • Regulatory constraints – these make investments uncertain, as it is unclear which services may be offered in the near future and thus which business priorities need to be developed
  • Complexity of emerging standards – the implementation of 5G with a focus on NFV powered by SDN capability is work in progress, and the standards address only part of the solution (the network), leaving management and control components open to implementation via a complex overlay of off-the-shelf open-source components and development paradigms (OPNFV, OpenStack, OpenDaylight, ONOS and others)
  • The generalization of the management layer has been addressed to some extent by ECOMP and other relevant open-source initiatives, yet there is complexity in integrating such a solution with the underlying alternatives for NFV and SDN, as well as with existing operational ecosystems

It is our opinion that progress can be made by incrementally and iteratively applying sound software engineering principles to generic open platforms, while focusing on point solutions to discrete business cases.

Leveraging one or more of the existing and emerging standards is a given, at least as a reference point in design, together with the utilization of components where these exist.

But ultimately, solutions will develop in keeping with market forces and capabilities, not by standards alone; technology is moving faster than adoption.

It is therefore essential to understand the key design and implementation aspects which can be leveraged to drive change and readiness in CSPs' capability.

The following items are listed in no particular order; early analysis and business case definition will to a large extent guide the process:

  • Understanding the size and scale of the business case
  • Modelling the components throughout the system stack
  • Identifying key security and operational constraints
  • Applying Virtualization and open cloud adoption, where feasible
  • Development of selected "cloud" capabilities for storage and compute nodes – this is feasible given the availability of statistical multiplexing software, but scaling down is more challenging than scaling up
  • Development of a minimal container ecosystem on which VNFs can run (see the sketch after this list)
  • Decomposition of the network control and data transfer functions (see the modelling exercise)
  • Limited application of one or more of the existing NFV environments, integrated with a virtualized compute and storage stack implemented with some form of containerization
  • Integration and migration of existing infrastructure to a cloudified environment
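
As a minimal sketch of the container item above, a VNF image could be launched via the Docker SDK for Python (the docker package); the image name, network and restart policy are placeholders, and a production NFV stack would sit behind a proper VIM rather than raw Docker calls.

    # Launching a containerized VNF via the Docker SDK for Python.
    import docker

    client = docker.from_env()
    vnf = client.containers.run(
        "example/vfirewall:latest",            # hypothetical VNF image
        detach=True,
        name="vnf-fw-01",
        network="mgmt-net",                    # pre-created management network
        restart_policy={"Name": "on-failure"},
    )
    print(vnf.name, vnf.status)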

The exercise is largely one of decomposition, standards-based design, open product selection, and pilot development and testing – there is no silver bullet at this stage.

The question is who is best positioned to execute this work, given that existing vendors are understandably unwilling to cannibalize their existing licensing models in favor of building open software-based ecosystems, while most operators are not set up for large-scale software development. They used to be, but that was a long time ago.

Equally, there is an open field for service provider entrants and MVNOs positioned to offer selected services based on the virtual/cloud paradigm, provided that they are ready and able to invest in setting up the required ecosystems.

Partnering with an integrator and/or system vendor who is willing and able to design and implement for "brownfield", integrating existing critical operations, is key to this process and represents opportunities for both parties.

Establishing a minimal internal architecture function is a necessity, as is keeping track of internal road-map and prioritization functions and applying impact and risk analysis to the supply chain.

In particular, there are certain key differentiators which can be realized as revenue once high-quality-of-service, virtualized (not necessarily cloud-based) infrastructure is in place; these need to be qualified in terms of legal and regulatory constraints, privacy and security.

In the next chapter on this subject, we will continue with feedback on these potential revenue streams and the constraints thereof.