Sunday, October 4, 2015

Operational Windows Are a Basis for the Operational Management Experience (IOC)

I have talked about the drive by many companies to reduce “Operational Variation” across plants, teams, and industries. This is core to the journey to “operational excellence”: gaining consistency, awareness, and early detection of situations. When you look at the table below on the levels of human operational automation, the drive of the integrated operational experience is to reach level 5, “worker management by exception”; a big part of this is awareness of the current operational process status relative to optimum.

There are many ways that operational control can be implemented; as pointed out, it could be through actions and processes becoming embedded, and this will certainly be the big driver on the road to “Operational Innovation”.

But another way is by displaying key operational indicators to a knowledge worker, where these indicators are mapped within boundary conditions. These boundaries are set up based on the time and on their relevance to the role and the activity/task the knowledge worker is performing. This sort of “operating window” gives the knowledge worker the context, recommendations, and knowledge to make operational decisions in a timely and consistent way. The same operational window can be used across multiple sites and teams for that role and activity, providing consistency of control.
The figure below is an example of an operational window.

On the left you can see the operational running trend, with the changing boundary conditions of operation based on product etc. The green shows the area of optimum control, safety, and production performance. On the right-hand side you have the reasons for, and number of instances of, deviations for periods out of operational control.

Place this sort of operational window, where the operational boundaries come from the business strategy but in the context of the role and activity no matter where in the plant operational team the worker sits, and where the feedback and running side is real-time from the plant. Position this window adjacent to the current control/HMI screens the operator is using, or the maintenance or process screens other users are working with, and you start the transformation to a knowledge worker.
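To make the idea concrete, here is a minimal sketch (in Python, with invented product names and boundary values) of how an operating window might classify live values against product-dependent boundaries and tally deviation reasons, as in the right-hand pane described above:

```python
from collections import Counter

# Hypothetical product-specific operating boundaries (units are illustrative).
BOUNDARIES = {
    "ProductA": {"low": 70.0, "high": 85.0},
    "ProductB": {"low": 60.0, "high": 75.0},
}

def check_sample(product, value):
    """Classify one measurement against the active operating window."""
    band = BOUNDARIES[product]
    if value < band["low"]:
        return "below operating window"
    if value > band["high"]:
        return "above operating window"
    return "in window"

def deviation_summary(samples):
    """Tally the reasons for out-of-window operation, like the counts
    shown on the right-hand side of the operational window display."""
    reasons = Counter()
    for product, value in samples:
        status = check_sample(product, value)
        if status != "in window":
            reasons[status] += 1
    return reasons

summary = deviation_summary([
    ("ProductA", 82.0),   # in window
    ("ProductA", 90.0),   # above the window
    ("ProductB", 55.0),   # below the window
    ("ProductB", 74.0),   # in window
])
print(dict(summary))  # {'above operating window': 1, 'below operating window': 1}
```

Note how the boundaries change with the product: the same raw value can be in-window for one product and a deviation for another, which is exactly the context the knowledge worker needs.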

Sunday, September 27, 2015

The Changing Landscape of Supervisory Systems: from HMI / Common Control Room to “Distributed Multi-Point Operational Landscapes”!

For the last couple of years we have seen new supervisory solutions emerging that will require a rethink of the underlying systems and how they are implemented; the traditional HMI and control architectures will not satisfy them! Certainly in upstream Oil and Gas, Power, Mining, Water and Smart Cities we have seen significant growth in the Integrated Operational Center (IOC) concept, where control of multiple sites comes back into one room, and where planning and operations can collaborate in real-time. Initially companies just virtualize their existing systems back to the center; then they standardize the experience for operational alignment and effectiveness; and then they simulate and model, a last step not many have reached.

But in the last couple of weeks I have sat in discussions where people talk about this central IOC, which is key. When you start peeling back the “day in the life of operations”, the IOC is only the “quarterback” in a flexible operational team of different roles, each contributing at a different level of operation. Combine this with a dynamic operational landscape, where the operational span of control over operational assets is changing all the time. The question is: what does the system look like, and do the traditional approaches apply?

When you look at the operational landscape below, you can see hundreds of operational control points where humans will have to interact with the system, with different spans of control, and where operational points will be manned and unmanned on a regular basis.

Traditionally companies have used isolated (siloed) HMI/DCS workstation controls at the facilities, others at the regional operational centers, and others again at the central IOC, and stitched them together. Now add the dynamic nature of the business with changing assets, plus a mobile workforce bringing an additional operational station: that of the mobile (roaming) worker. All must see the same state, scoped to their span of control and their accountability to control.
Since the 1990’s, control system technology has enabled a flexible delivery of work, where workers can support both “normal” and “abnormal” situations from multiple locations, either in the same room or across the world.  This mechanism has to be reliable, easy to implement, and easy to maintain.  Some customers have applied this mechanism to more than 5 different “points of operation”, which range from equipment panels, mobile devices and local control rooms to regional and national operations centers.

The requirements have become the following:

  1.        “Transparency of trusted operational state”: with real-time actionable operational decisions becoming key, the system must raise situations automatically through operational and asset self-awareness, giving the ability to monitor with transparency into the situational state of the whole operational landscape.
  2.        “Point of operation”: the implementation must support a configuration where one of the multiple points of operation uniquely can operate, which includes responding to alarms.
  3.        Simultaneous “point of operation”: the implementation must also support a configuration where more than one worker can operate, though this is rarely more than two.
  4.        “Span of operation” flexibility: each “point of operation” can be an individual PID, start/stop or device, or it can be a broader “span” of operation.  This “span” must be assignable in a flexible manner, where the “span” can be adjusted to become narrower or broader.  Example conditions include night time or overhaul conditions for some operations.
  5.        Ownership visibility: each possible point of operation must have a simple and clearly visible indication when it does not have ownership, and a reinforced indication when it does. There must be clear visibility across the operational landscape as to who has the point of control, so that accountability to respond to a situation promptly is understood as a team.
  6.        Management of alarms: it is essential for safety, legal, environmental and health requirements that new alarms animate, suppress/shelve, annunciate and trigger display changes only at the point(s) of operation, and that only the workers using the point(s) of operation can acknowledge or silence new alarms. This covers all alarms, from asset to process to operational, with the scope of alarm responsibility aligned to the span of control; as a team there are “no blind spots”, and alarm and situational awareness is escalated based on responsiveness and the situation. The assumption that someone else has control and is doing something must be removed.
  7.        Management of operational events across different points of operation; for example, operators want to be able to set operational limits/events across different operations. How is this managed and governed?
  8.         IT/OT seamless integration: operating, monitoring, trending, alarming and integration with other islands of information to enable the teams to make informed decisions.
  9.         Reliability, upgradability, cyber security, network architecture, cloud;
  10.      One problem cannot bring down the whole operations!!
  11.      Assignment of operation: an authorized worker must have an easy and reliable means to assign and adjust the spans of operation.  The following diagram shows examples of transferring the span of operation between a roving user, a local control room, and a remote operations center:

In the above diagram, Areas or Sites “A” through “D” require supervision by different users or by the same user in different locations.  This scenario also applies to multiple operations consoles or desks within the same room.  The span of operation varies with the operations situations.  The span of operation can overlap among multiple users and multiple locations.
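As a rough illustration of requirements 2, 4, 5 and 11, the sketch below models span-of-operation assignment with single-point ownership and ownership visibility. All the names here are hypothetical, not a real control-system API:

```python
class SpanOfOperation:
    """Minimal sketch of 'point of operation' ownership: each area is
    operable from exactly one point at a time, ownership is visible to
    every point, and an authorized worker can reassign spans."""

    def __init__(self, areas):
        # area -> point of operation currently holding ownership (or None)
        self.owner = {area: None for area in areas}

    def assign(self, area, point):
        """Transfer the span of operation for an area to a new point."""
        self.owner[area] = point

    def can_operate(self, area, point):
        # Only the current owner may operate, e.g. acknowledge alarms.
        return self.owner[area] == point

    def visibility(self):
        # Every point of operation can see who holds control of each area.
        return dict(self.owner)

span = SpanOfOperation(["A", "B", "C", "D"])
span.assign("A", "local control room")
span.assign("B", "remote operations center")
span.assign("C", "roving user")

print(span.can_operate("A", "local control room"))       # True
print(span.can_operate("A", "remote operations center")) # False
print(span.visibility()["D"])  # None: unassigned, a visible gap in coverage
```

The point of the `visibility()` call is requirement 5: an unassigned area shows up as an explicit gap, rather than everyone assuming someone else has control.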

We need one system but multiple operational points, layouts, and awareness, so the OPERATIONAL TEAM can operate in unison, enabling effective operational work. Below is a high-level diagram of the operational team by situation; you will have multiple skills in each situation, and people will move through the situational states, but the diagram shows the emerging operational work characteristics.

This emerging dynamic multi-point operational landscape is a big topic that I will explore over the next few weeks, as traditional thinking, traditional architectures, and traditional implementations will not enable the transformation in operational work needed for effective, agile operations.

Sunday, September 20, 2015

Operational Innovation !! Technology is here but Culture Needs to Evolve!

“Operational Innovation” is a hidden objective of many companies, even if they do not know it; this came home to me again last week. Why is it hidden? Because it is not natural for innovation to be driven from all parts of the business; companies have a habit of centralizing it. Actually, we can all contribute innovative improvements to the way we take on and execute our “jobs to be done” during the day.

Tuesday, September 8, 2015

Is Central Practical, or Is a Distributed Solution Leveraging Micro Data Centers Really the Practical Way to Go for Industry Solutions?

In the last couple of weeks I have come across a number of very large opportunities (10 Million I/O) that challenge the traditional thinking of potentially “central” systems.

The key is that these are distributed operational systems: supervisory control, information systems, and operational control (MES) systems. They have distributed value-generating assets (plants) across different sites, with the many sites orchestrated together into a working, aligned value chain. But today's production requires flexibility and agility, requiring “actionable decisions” to be taken at all levels of the value chain.

At the “edge”, e.g. the field plants, decisions must be able to be made, and that means the information and the ability to act are local, with a timely response. So while a central data center with master data storage and applications is logical, it is only practical to have distributed systems.

Last week I was speaking at a conference and one of my fellow speakers talked about “Micro Data Centers” as the next wave in industrial computing, following the banks etc. This appealed to me, as I have faced these large opportunities where the customers want 99.99% uptime and responsive systems. While we have been developing the software to address distributed “peer-to-peer” systems and solutions, the key is the associated hardware.

So what is a micro data center? A micro-datacenter (MDC) is a smaller, containerized (modular) datacenter system that is designed to solve different sets of problems, or to take on different types of workload that cannot be handled by traditional facilities or even large modular datacenters.

Whereas an average container-based datacenter hosts dozens of servers and thousands of virtual machines (VMs) within a 40ft shipping container, a micro-datacenter includes fewer than 10 servers and fewer than 100 VMs in a single 19in box. Just like containerized datacenters, MDCs come with built-in security systems, cooling systems, and flood and fire protection.

This blog is worth a read, with some interesting comments:

“Their size, versatility and plug-and-play features make them ideal for use in remote locations, for temporary deployments or even for use by businesses temporarily in locations that are in high-risk zones for floods or earthquakes. They could even serve as a mini-datacenter for storage and compute capacity on an oil tanker.

We have already discussed that micro data centers are breakthrough solutions, but we are still in the early adoption phase and the market potential is untold. Where are they already in use? How fast will the market ramp up?

Actually, you may already be using micro data centers and don’t even know it. The demands of real-time (or near real-time) data processing needs in environments with factory automation (robots), industrial automation (cranes), and bidding or trading stocks and bonds, for example, call for the capabilities micro data centers provide.

The sheer amount of data required in industries like oil and gas drilling and exploration, construction and mining also requires processing to be on site, so it does not go through latency-increasing hubs.

Other sites may not have as much big data, but micro data centers offer advantages through standardization, fast deployment, ease of management and troubleshooting, and security, not to mention cost effectiveness.

But the use case on the horizon with the greatest potential is a massive distributed network of micro data centers forming a content distribution network. This processing on the edge will support the commercial Internet of Things (IoT), including the fast-emerging category of wearable devices. The processing of data could be reduced to milliseconds here.

Forbes recently reported on the massive size of the IoT — expecting worldwide market solutions to reach a value of $7.1 trillion by 2020 and connected devices to double by then to 40+ billion. While micro data centers might be a niche today, they will become more ubiquitous as they will be needed to facilitate this unprecedented connectivity.”

So when looking at these distributed industrial architectures, these “small bunker” micro data centers have a real opportunity to provide a distributed hardware architecture with the practical capability to support distributed industrial applications of tiered historians and application servers.
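As a sketch of what such a tiered architecture implies in software, the store-and-forward pattern below keeps data and decisions local at the edge and replicates to a central (tier-2) historian when the link allows. The class and method names are illustrative, not a real product API:

```python
import queue

class EdgeHistorian:
    """Sketch of store-and-forward: data is recorded and acted on locally
    (the micro data center at the plant), then replicated to a central
    historian when connectivity allows."""

    def __init__(self):
        self.local_store = []          # stand-in for durable local storage
        self.outbound = queue.Queue()  # samples pending replication

    def record(self, tag, value):
        sample = (tag, value)
        self.local_store.append(sample)  # local decisions use this immediately
        self.outbound.put(sample)        # replicate when the link is up

    def forward(self, central_store, link_up):
        """Drain the outbound queue to the central tier if the link is up;
        return the number of samples sent."""
        if not link_up:
            return 0
        sent = 0
        while not self.outbound.empty():
            central_store.append(self.outbound.get())
            sent += 1
        return sent

edge = EdgeHistorian()
central = []
edge.record("FIC-101.PV", 42.0)
edge.record("FIC-101.PV", 43.5)
edge.forward(central, link_up=False)  # WAN outage: data stays queued locally
print(len(central))                   # 0
edge.forward(central, link_up=True)
print(len(central))                   # 2
```

The design choice this illustrates: the edge never depends on the center to keep operating, which is exactly why a central-only data center is logical but not practical on its own.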

Sunday, August 30, 2015

Manufacturing Industry Leads Cloud Adoption

It was good to see a blog by Gary Mintchell revealing that the manufacturing sector is leading the adoption of cloud, yet so often I hear the words “it will not happen in our company or industry for years!”.

Gary writes a great set of blogs, always worth having a link to.

Some Quotes from this blog:

global study that indicates cloud is moving into a second wave of adoption, with companies no longer focusing just on efficiency and reduced costs, but rather looking to cloud as a platform to fuel innovation, growth and disruption.
The study finds that 53 percent of companies expect cloud to drive increased revenue over the next two years. Unfortunately, this will be challenging for many companies as only 1 percent of organizations have optimized cloud strategies in place while 32 percent have no cloud strategy at all.

So often I have salespeople saying that cloud is driven by a change in cost model, but in all my interviews with customers and strategic thinkers it is the “changing speed of change and flexibility needed today” that is driving it, with cloud as the platform for addressing it. The world is changing faster and faster, and the ability to deliver the RIGHTS:
  • Right Product
  • Right Price
  • Right Cost
  • Right Time
  • Right Location

is key, and this means rolling out increasing numbers of new products across a distribution of value assets (plants) that will produce smaller lots (production runs) at less cost.
Understanding NOW the state of inventory, work in progress, and the equipment needed to get to market is essential.

We are also seeing the “walls of a plant” expand, beyond the manufacturing plant, to treat the whole manufacturing and distribution supply chain as part of manufacturing. So the traditional MES (Manufacturing Execution System) is expanding to offer the ability to model the plant-to-store chain as operations, where product must be tracked for compliance, and work items distributed to workers and assets in that distribution chain.

“In the study IDC identifies five levels of cloud maturity: ad hoc, opportunistic, repeatable, managed and optimized. The study found that organizations elevating cloud maturity from ad hoc, the lowest level, to optimized, the highest, see dramatic business benefits, including:
  •        revenue growth of 10.4 percent
  •         reduction of IT costs by 77 percent
  •         shrinking time to provision IT services and applications by 99 percent
  •         boosting IT department’s ability to meet SLAs by 72 percent
  •         doubling IT department’s ability to invest in new projects to drive innovation.”

Cloud Adoption by Industry
“By industry, manufacturing has the largest percentage of companies in one of the top three adoption categories at 33 percent, followed by IT (30 percent), finance (29 percent), and healthcare (28 percent). The lowest adoption levels by industry were found to be government/education and professional services (at 22 percent each) and retail/wholesale (at 20 percent). By industry, professional services, technology, and transportation, communications, and utilities expected the greatest impact on key performance indicators (KPIs) across the board.”

The above findings and results do not surprise me, based on my own engagements in the field and on observing the increased realization that speed of change is important. Traditional large projects are going out the door, replaced by rapid projects that leverage existing expertise in the industry and add value through a company's own operational processes to differentiate.

Sunday, August 23, 2015

Can Sustainable Manufacturing Operations Management Exist without some sort of Master Data Management?

Over the last couple of months we have seen customers increasingly investigating strategies to answer this question: “how do we enable alignment across ‘level 3’ operational applications?”

This area of aligning the level 3 applications without rip and replace will become one of the core requirements in making Manufacturing Operations Management sustainable and effective.   

To sync between systems, people look at data warehouses or do manual binding, but these are just not practical in a sustainable and ever-changing world. There are usually upwards of 20+ systems, which come from different vendors, and even when they do come from the same vendor they are implemented by different cultures in the plants. Even the thought pattern on “just asset naming” differs between these groups.

Borrowing again from Gerhard Greeff, Divisional Manager at Bytes Systems Integration, as he put it in his paper “When last did you revisit your MOM?”:

MDM, or Master Data Management, is the tooling used to relate data between different applications.
So what is master data and why should we care? According to Wikipedia, “Master Data Management (MDM) comprises a set of processes and tools that consistently defines and manages the non-transactional data entities of an organization (which may include reference data). MDM has the objective of providing processes for collecting, aggregating, matching, consolidating, assuring quality, persisting and distributing such data throughout an organization to ensure consistency and control in the ongoing maintenance and application use of this information.”

Processes commonly seen in MDM solutions include source identification, data collection, data transformation, normalization, rule administration, error detection and correction, data consolidation, data storage, data distribution, and data governance.

Why is it necessary to differentiate between enterprise MDM and Manufacturing MDM (mMDM)? According to MESA, in the vast majority of cases, the engineering bill-of-materials (BOM), the routing, or the general recipe from your ERP or formulation/PLM systems simply lack the level of detail necessary to:

1. Run detailed routing through shared shop resources
2. Set up the processing logic your batch systems execute
3. Scale batch sizes to match local equipment assets
4. Set up detailed machine settings

This problem is compounded by heterogeneous legacy systems, mistrust/disbelief in controlled MOM systems, data ownership issues, and data inconsistency. The absence of a strong, common data architecture promotes ungoverned data-definition proliferation and point-to-point integration, and works against cost-effective data management strategies. Within the manufacturing environment, all this translates into many types of waste and added cost.

The master data required to execute production processes is highly dependent upon individual assets and site-specific considerations, all of which are subject to change at a much higher frequency than typical enterprise processes like order-entry or payables processing. As a result, manufacturing master data will be a blend of data that is not related specifically to site level details (such as a customer ID or high-level product specifications shared between enterprise order-entry systems and the plant) and site-specific or “local” details such as equipment operating characteristics (which may vary by local humidity, temperature, and drive speed) or even local raw material characteristics.

This natural division between enterprise master data and “local” or manufacturing master data suggests specific architectural approaches to manufacturing master data management (mMDM) which borrow heavily from Enterprise MDM models, but which are tuned to the specific requirements of the manufacturing environment.

Think of a company that has acquired various manufacturing entities over time. They have consolidated their enterprise systems, but at site level things are different. Different sites may call the same raw material different things (for instance 11% HCl, Hydrochloric acid, Pool Acid, Hydrochloric 11%, etc.). Then this same raw material may also have different names in the Batch system, the SCADA, the LIMS, the stores system, the scheduling system and the MOM. This makes it extremely difficult to report, for instance, on the consumption of Hydrochloric Acid from a COO perspective: without an mMDM, the consumption query has to be tailored for each site and system in order to extract the quantities for use.

The alternative of course is to initiate a naming standardization exercise that can take years to complete as changes will be required on most level 2 and 3 systems. That is not even taking into account the redevelopment of visualization and the retraining of operators. The question is, once the naming standardization is complete, who owns the master naming convention and who ensures that plants don’t once again diverge over time as new products and materials are added?

The example above is a very simple one, for a raw material, but it can also be applied to other areas.
When you talk to customers you see comments and projects, and so many are trying to deal with this issue without really looking at the big problem and planning for it.

If a company has, for instance, implemented a barcode scanning solution, the item numbers for a specific product or component may differ between suppliers. How will the system know what product/component has been received or issued to the plant without some translation taking place somewhere? mMDM will thus resolve a lot of the issues that manufacturing companies are experiencing today in their drive for more flexible integration between level 3 and level 4 systems.
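A minimal sketch of such a translation layer, with invented material IDs and aliases based on the hydrochloric acid example above, might look like this:

```python
# Hypothetical mMDM lookup: many site-local names, one master material ID.
MASTER_MATERIALS = {
    "MAT-0001": {"name": "Hydrochloric acid 11%"},
}

# Aliases harvested from each site's Batch, SCADA, LIMS and stores systems.
ALIASES = {
    "11% HCl": "MAT-0001",
    "Hydrochloric acid": "MAT-0001",
    "Pool Acid": "MAT-0001",
    "Hydrochloric 11%": "MAT-0001",
}

def consumption_by_master(records):
    """Roll site-level consumption records up to master material IDs,
    so a COO-level query no longer needs per-site tailoring."""
    totals = {}
    for local_name, qty in records:
        master_id = ALIASES.get(local_name)
        if master_id is None:
            # An unmapped name is a governance problem, surfaced explicitly.
            raise KeyError(f"unmapped local name: {local_name!r}")
        totals[master_id] = totals.get(master_id, 0) + qty
    return totals

print(consumption_by_master([("11% HCl", 120), ("Pool Acid", 80)]))
# {'MAT-0001': 200}
```

The deliberate `KeyError` on unmapped names reflects the governance question raised earlier: divergence must be caught when new materials are added, not silently absorbed.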

The objective of the proposed split in architecture is to increase application flexibility without reducing the effectiveness and efficiency of the integration between systems. It also abstracts the interface mechanisms out of the application into services that can operate regardless of application changes. This will get rid of numerous “point-to-point” interfaces and make systems more flexible in order to adapt to changing conditions. The mSOA architecture also abstracts business processes and their orchestration from the individual applications into an operations business process management layer.  Now, one person is able to interact with multiple applications to track or manage a production order without even realizing that he/she is jumping between applications.
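The idea of abstracting the business process out of the individual applications can be sketched as a simple service facade. The adapters and field names below are stand-ins for illustration, not real vendor interfaces:

```python
class OrderTrackingService:
    """Sketch of the mSOA idea: the business process ('track a production
    order') lives in one service that consults several level-3 applications,
    so the user never knowingly jumps between them."""

    def __init__(self, adapters):
        # name -> callable(order_id) -> dict; each adapter wraps one application
        self.adapters = adapters

    def track(self, order_id):
        view = {"order_id": order_id}
        for name, adapter in self.adapters.items():
            view[name] = adapter(order_id)  # each app contributes its slice
        return view

# Hypothetical adapters for scheduling, batch, and LIMS applications.
service = OrderTrackingService({
    "scheduling": lambda oid: {"status": "released"},
    "batch":      lambda oid: {"phase": "charging"},
    "lims":       lambda oid: {"samples_pending": 1},
})
print(service.track("PO-1001")["batch"]["phase"])  # charging
```

Because the adapters sit behind the service, an application can be swapped or upgraded without touching the callers, which is the "get rid of point-to-point interfaces" benefit described above.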

Even with mSOA and mMDM, integration will not be efficient and effective unless message structures and data exchange are in a standard format. This is where ISA-95 once again plays a big part in ensuring interface effectiveness and consistency. Without standardized data exchange structures and schemas, not even mMDM and mSOA will enable interface re-use.

ISA-95 part 5 provides standards for information exchange as well as standardized data structures and XML message schemas based on the Business-to-Manufacturing Mark-up Language (B2MML) developed by WBF, including the verbs and nouns for data exchange. Standardizing these throughout manufacturing operations ensures that standard services are developed to accommodate multiple applications. Increasingly we are seeing process industries such as Oil and Gas and Mining looking towards these standards, and developing them to address the growing challenge of expansion vs. sustainability.
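As an illustration only (the element names below are deliberately simplified and are not the exact B2MML schema), a standardized material-exchange message could be built like this:

```python
import xml.etree.ElementTree as ET

def material_message(material_id, description):
    """Build a simplified material-definition XML message, loosely inspired
    by the B2MML MaterialDefinition structure. Element names here are
    illustrative; real B2MML uses its own namespaced schema."""
    root = ET.Element("MaterialDefinition")
    ET.SubElement(root, "ID").text = material_id
    ET.SubElement(root, "Description").text = description
    return ET.tostring(root, encoding="unicode")

msg = material_message("MAT-0001", "Hydrochloric acid 11%")
print(msg)
```

The value of the standard is precisely that both sender and receiver agree on these structures up front, so the same service can exchange material data with any compliant application.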

Saturday, August 15, 2015

Virtual Expert Teams, Provide One answer to “Time to Performance”

We often hear of the aging workforce as a big problem, and certainly it is, because this is not an evolutionary transition to the next generation of workers; it is a leap to a new generation, skipping at least one generation in between: from “baby boomers” to Gen Y or Millennials.

As has been stated many times, with this comes a change in the level of experience with the job and the site; but correspondingly there is a change in the way the new generation natively works, engaging with others, sharing more, and asking more, which provides the opportunity to bridge this experience transition.

Last week, in some discussions around Integrated Operational Centers (IOC), it was clear that the IOC is not about bringing what happens in field control rooms into a central location; the real transformation happens when the experience and the operational work are transformed. Shifting to an operational experience where:
  • Experience can be shared across sites, and workers through standard operational interfaces and experiences
  • There is a shift to an exception-based experience from monitoring, where the user interacts with the system only when required.
  • Where planners, operational control, and subject matter experts can align, and collaborate in real-time, sharing the same view on a situation.

But it was clear that the companies making the big step are going further, really introducing the “Flexible Operational Team” concept: shifting from an operator to a true operational team. From site workers who are now agents (eyes) for the rest of the team on site, to central control working closely with real-time planning and work-order execution, to experts in maintenance, process, safety, and management providing real-time knowledge and experience across multiple sites and multiple situations. The diagram below shows this “flexible operational team” and the associated transformation in operational work across the team, driven by the new work ethic of sharing and asking.

But what we are seeing in the market is some innovative approaches to solving this experience-transfer problem, through the use of “Virtual Expert Teams”. So what is this concept?

The key is to have these highly valued knowledge experts, who could span different aspects of the business, e.g. asset management, process, planning, optimization, and quality, empowered to work across the enterprise's many production assets/plants. Today many of these experts are restricted to contributing to their local plants, but a number of companies have started strategies that say: we must leverage these people. These experts must be able to access the state and information of the plants in a consistent manner, even though they may never have been there. They must also have a natural ability to collaborate with the local teams.

This means a local team is able to call upon the best experts relative to the situation they are dealing with. The expert is able to go online and access the plant situation in “near real-time” so they can see the situation while collaborating with the site team. They are able to drill in and do analysis, so as to draw upon experience and provide advice in real-time to the local team.

Now, is this easy? I would say no, as just accessing the plant data is not effective: the expert may be on the other side of the world, never having been to the plant, so the data will be in different measures and context from what they expect. In order to achieve this virtual team we need a “trusted information” system, where the data/information is in a consistent context.
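A tiny sketch of what “consistent context” means in practice: normalizing site-local units into a canonical form before the remote expert sees them. The site names and context model here are invented for illustration:

```python
# Hypothetical context model: each site publishes tags in local units;
# the 'trusted information' layer normalizes them so a remote expert
# sees consistent values regardless of which plant they open.
SITE_CONTEXT = {
    "plant_eu": {"temp_unit": "C"},
    "plant_us": {"temp_unit": "F"},
}

def to_canonical_celsius(site, value):
    """Convert a site-local temperature into the canonical unit (Celsius)."""
    unit = SITE_CONTEXT[site]["temp_unit"]
    if unit == "C":
        return value
    if unit == "F":
        return (value - 32.0) * 5.0 / 9.0
    raise ValueError(f"unknown unit: {unit}")

# The expert's view is the same number no matter where the data came from.
print(to_canonical_celsius("plant_eu", 100.0))  # 100.0
print(to_canonical_celsius("plant_us", 212.0))  # 100.0
```

Units are only the simplest case; the same normalization applies to asset names, tag hierarchies, and state models, which is why a validated context model underpins the trusted information system.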

But the above concept is real and valid, with leading companies currently implementing these “virtual communities”, with subject matter experts on call across sites and situations. The operational team experience is not a “rip and replace”; it is built on the existing automation/supervisory systems installed in the 90s and 2000s, but now aligned with a validated model and context, plus a collaboration user experience where the system notifies of “abnormal situations”, the controllers share and collaborate in real-time, and virtual experts can access their view of the situation in real-time. Note that the view of the expert may be a different angle on the data, relative to their expertise.

But the system is NOT a one-way information system; it is a bidirectional, interactive experience with accountability of action and role, and a built-in ability to evolve the knowledge and experience of the system for the future.

Too often I am seeing slices of this operational team landscape being implemented without realizing the whole picture, and so missing the paradigm shift in value.