
Closed Loop Operations Management June 3, 2008

Posted by Brian Sohmers in bpm, Business, collaboration, enterprise 2.0.

Aberdeen Group recently released a white paper called “Technology Strategies for Closed Loop Inventory Management”. This paper explains how inventory has been, and will continue to be, the lifeblood of supply chains and, as such, needs to be properly managed.

Inventory drives revenue and efficiency for companies by reducing capital (with few inventories in stock) and simultaneously increasing customer service levels. – Aberdeen Group.

Those companies following the principle of closed loop inventory management can:

  1. Determine safety stock targets
  2. Replenish inventory into distribution buffers
  3. View end-to-end inventory
  4. Respond quickly to market events
  5. Segment inventory based on customer service requirements

Closed Loop Operations Management

Decades ago, closed loop quality management was in vogue, and most companies today have achieved it through total quality management programs. It’s certainly refreshing to see closed loop inventory management being discussed, since it is a must for a high-tech company competing in today’s global marketplace, but the focus on just quality and inventory falls short. What sets best-in-class companies apart from the competition is closed loop operations management, which encompasses inventory, quality, production, accounting, and the other value-added activities that help bring products to market.

Closed Loop Systems

Before exploring the attributes and benefits of closed loop operations management, let’s quickly review what a closed loop system is. In an open-loop system, there is no feedback; inputs are calculated based on the desired outcome only. An example of open loop management in high-tech operations is setting inventory levels, production schedules, and supply chain plans based on sales forecasts and orders alone. A closed-loop system, on the other hand, is controlled based on both desired outcomes and feedback from the system. Applying this principle to the former example would mean that inventory levels, production schedules, and supply chain plans are determined not just by sales forecasts and orders, but also by feedback from ongoing operations.
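
To make the distinction concrete, here is a minimal sketch in Python. Everything in it is invented for illustration (the function names, the numbers, and the simple correction rule); it is not taken from the Aberdeen paper. The open-loop plan builds to the forecast alone, while the closed-loop plan corrects for feedback such as the actual process yield and the real inventory position.

    # Minimal sketch: open- vs closed-loop planning (hypothetical numbers).

    def open_loop_build_plan(forecast):
        """Open loop: production is driven by the forecast alone."""
        return forecast  # no feedback from actual operations

    def closed_loop_build_plan(forecast, actual_yield, on_hand, target_stock):
        """Closed loop: correct the plan with feedback from operations."""
        inventory_gap = target_stock - on_hand           # feedback: real inventory position
        gross_need = forecast + inventory_gap            # desired outcome plus correction
        return max(0, round(gross_need / actual_yield))  # feedback: real process yield

    forecast = 1000
    print(open_loop_build_plan(forecast))                         # -> 1000
    print(closed_loop_build_plan(forecast, actual_yield=0.92,
                                 on_hand=150, target_stock=250))  # -> 1196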

Having a closed loop system provides the following advantages over an open loop system:

  1. Disturbance adjustment (such as actual yields and cycle-time)
  2. Guaranteed performance even with model uncertainties (no supply-chain planning model matches the real supply chain perfectly)
  3. Reduced sensitivity to communication errors (developing the plan is one thing, but errors can develop when communicating it, especially to trading partners)
  4. Improved reference tracking to plan

Visibility

The difference between open and closed loop operations management, then, is the use of feedback from ongoing operations. One of the problems operations folks face today is that they spend much time, effort, and money on planning to optimize operations, by minimizing inventory, decreasing cycle time and lead time, and improving on-time delivery, but they lack the visibility and execution ability to achieve their targets. As a result, buffers (inventories, freeze periods, longer lead times) are routinely accepted to insulate plans from disruptions, leaving decisions local in time and function, often based at best on historic performance.

To close the loop between planning and execution, companies need to establish feedback through visibility to monitor for issues and trading partner compliance with instructions, trade laws, and environmental laws; systematic root-cause analysis; and decision support for proactive and concerted responses. This ensures plans actually happen rather than just replanning to adjust to reality. It is an ongoing process in which visibility is established to continuously monitor operations and enable management by exception. Once an issue is flagged, its impact is assessed, a root-cause analysis is performed to help narrow down possible corrective actions, and different “what-if” scenarios are modeled to plan a response. The closed loop operations management process also leads to continuous learning and process improvements. It’s not enough to plan on having a lean supply chain with short lead times; you need to achieve it through closed loop operations management.
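
As a rough sketch of that monitor, assess, and respond loop in Python, with every KPI name, threshold, and candidate action invented for illustration:

    # Illustrative skeleton of the closed loop: monitor -> assess -> respond.
    # All KPI names, thresholds, and actions here are hypothetical.

    THRESHOLDS = {"cycle_time_days": 12, "on_time_delivery": 0.95}

    def monitor(kpis):
        """Management by exception: surface only the KPIs that breach a threshold."""
        alerts = []
        if kpis["cycle_time_days"] > THRESHOLDS["cycle_time_days"]:
            alerts.append("cycle_time_days")
        if kpis["on_time_delivery"] < THRESHOLDS["on_time_delivery"]:
            alerts.append("on_time_delivery")
        return alerts

    def assess_and_respond(exception, scenarios):
        """After root-cause analysis, pick the what-if scenario with least impact."""
        best = min(scenarios, key=lambda s: s["projected_impact"])
        print(f"{exception}: respond with '{best['action']}'")

    kpis = {"cycle_time_days": 15, "on_time_delivery": 0.97}
    for exc in monitor(kpis):
        assess_and_respond(exc, [
            {"action": "expedite at alternate site", "projected_impact": 3},
            {"action": "reprioritize backlog", "projected_impact": 5},
        ])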

Real-Time Visibility

Gaining visibility into the complete picture is essential for closed loop operations management, but this is easier said than done. A variety of reasons make it difficult to achieve high quality visibility. Before we discuss the challenges, let’s take a look at how to measure the quality of visibility in a closed loop system. There are two main factors that determine quality:

  1. Resolution – how much feedback or visibility a company has into operations, including outsourced manufacturing
  2. Latency – how quickly or real-time this information is gathered and available for KPI tracking and exception management

The case for high quality visibility, with high resolution and low latency feedback, is simple. If you don’t have a complete picture of your current operations, or the picture you have isn’t current, the feedback in your closed loop system becomes less useful, decreasing your control over operations. An example from high-tech operations is a sudden increase in cycle time for a particular outsourced manufacturing process. If your organization doesn’t have visibility into this, or doesn’t become aware of the increase for days or weeks, your ability to make the appropriate adjustments to reduce its effect on downstream operations, and ultimately on end customers, is significantly diminished. With real-time visibility, the problem is recognized right away, enabling an immediate response and minimizing the impact downstream. It’s not enough to have visibility; you need complete, real-time visibility into your operations, including outsourced manufacturing.
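
A small, hypothetical Python illustration of why latency matters: the same cycle-time spike becomes actionable at very different times depending on the reporting delay of the feed that carries it. All dates and latencies are invented.

    import datetime as dt

    spike_started = dt.datetime(2008, 6, 1, 8, 0)   # outsourced process slows down

    def detection_time(feed_latency_hours):
        """A spike only becomes visible after the feed's reporting delay."""
        return spike_started + dt.timedelta(hours=feed_latency_hours)

    for name, latency in [("real-time feed", 0.25),
                          ("nightly batch", 24),
                          ("weekly partner report", 168)]:
        lag = detection_time(latency) - spike_started
        print(f"{name:22s} -> response can begin after {lag}")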

The bar for visibility into operations has been raised in today’s demanding business environment. Two main market drivers are responsible: the increasing speed of business and globalization. Faster product lifecycles and increased customer fulfillment demands are changing the speed at which business is conducted. This, in turn, increases the cost of any latency in responding to demand or supply-chain changes; responding to a quickly changing environment requires real-time information. Secondly, the move to globally dispersed business models, where over 50% of the information needed to efficiently run operations resides outside your four walls, makes it increasingly challenging to achieve high resolution visibility. High-tech companies can no longer afford to take a hit on business agility when they outsource their manufacturing. All participants in the value chain need complete visibility for faster, better decisions.

Intelligent Operations Management

Addressing these needs and closing the loop in operations requires a new category of solutions. Intelligent Operations Management (IOM) has been recognized by leading analysts as a new and unique category of enterprise software that aligns business objectives with operational systems. Using real-time information from all parts of your extended enterprise, IOM complements existing infrastructure, allowing better access to information, more effective collaboration between business units, and better operational decision-making. By closing the loop with Intelligent Operations Management, companies are able to control costs, improve service levels, decrease lead times, and address pricing pressures. Companies that follow the principles of closed loop operations management, enabled through an Intelligent Operations Management solution, are best equipped to compete in today’s global business environment.

Industry Recognition Received for Intelligent Operations Management February 22, 2008

Posted by Jeff in Business, collaboration, enterprise 2.0.

We’ve reported on our conversations on the concepts, derivation, and terminology regarding Intelligent Operations Management in several earlier postings, including “Collaborative Decision Environments“, “Notes on Enterprise Software“, “Shorter Time to Volume is the New Goal“, and “Defining the Category: Intelligent Operations Management“.

In addition to the discussions with Bob Parker of Manufacturing Insights mentioned in many of the above postings, Serus has also made presentations to Gartner Group, AMR, Aberdeen, ChainLink, Ventana, and CIMData.

One of the first external validations of Serus’ concept of Intelligent Operations Management has now been written and released by Manufacturing Insights.  Their recent white paper describes all layers and concepts of IOM and provides examples of its use.

The term Intelligent Operations Management can be broken down into its three parts:

  • “Operations” defines the scope of our solutions.  We address all parts of operations, including forecasting, planning, and work-in-progress tracking: basically everything from the manufacturing product specification (including ECOs) and order placement, through the fulfillment activities and WIP at your outsourced organizations, to the interface with your financial systems for invoicing and reconciliation.  Not surprisingly, most of our sales are to the VP of Operations, though we have recently gained traction with VPs of Finance, and are starting to gain traction with the sales functions in our customers, these being on each end of manufacturing in the full lifecycle.
    We use this word because there are few systems that directly address the challenges within Operations.  Hence we provide unique tools that are based on real-world experience in operations, rather than trying to cobble something together with a spreadsheet.
  • “Management” defines what we enable our customers to do within that scope.  Our definition of management means first solving visibility challenges so that you understand the situation, in terms of inventory levels, backlog, etc., but most importantly allowing control of the situation, by making decisions regarding actions, orders, and instructions that are fed back to the organizations in the supply chain.
    This concept of management is most clearly thought of in terms of “feedback loops” which are a combination of visibility and control with the idea that corrective action is being taken, and new insights or information is being gained each time through the loop.
    Our product has dashboards that allow you to access information at the visibility, or raw content, level.  A related concept is that the content comes from multiple sources and hence has to be collected and cleansed, meaning that errors or inconsistencies, such as inconsistent naming and organization, are resolved.
  • “Intelligent” defines the next level above Management, which adds the ability to define business goals and operational constraints within which the system operates.  Typical examples of goals are “keep service levels above 98%”, “reduce stockouts to no more than 5%”, or “reduce inventory”.  Adding goals into the decision making allows the system to suggest solutions or possible decisions that are biased toward the goal, again enabling the feedback loops to run more efficiently (see the sketch after this list).  Another goal may be to carry out or advance a business process, as business processes act as the foundation for many feedback loops.
    Examples of operational goals within which decisions are carried out often occur in finance, where, for example, a trading organization makes trading decisions within a corporate goal of maximizing profit, alongside other goals that limit credit risk or cash utilization.
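
The sketch below illustrates goal-biased decision making in Python. The goal, the candidate decisions, and the numbers are all hypothetical, and this is not Serus’ actual implementation; the point is only that a hard goal filters the options and the remaining goal ranks them.

    # Hypothetical sketch: bias decision ranking toward stated business goals.

    SERVICE_LEVEL_FLOOR = 0.98   # goal: keep service levels above 98%

    candidates = [  # invented replenishment options
        {"name": "lean buffer",  "service_level": 0.960, "inventory_cost": 1.0},
        {"name": "split source", "service_level": 0.985, "inventory_cost": 1.4},
        {"name": "large buffer", "service_level": 0.995, "inventory_cost": 2.2},
    ]

    # Filter out options that violate the hard goal, then prefer low inventory.
    feasible = [c for c in candidates if c["service_level"] >= SERVICE_LEVEL_FLOOR]
    suggestion = min(feasible, key=lambda c: c["inventory_cost"])
    print(f"suggested decision: {suggestion['name']}")   # -> split source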

A classic example of a goal-driven planning system is a supply chain planner, which has an engine that generates plans given a set of operational goals and thresholds.  However, many supply chain engines run overnight, and the plans they produce are often out of date by 10 or 11 AM the next morning due to some unexpected change.  For this reason, Serus focuses on rapid tactical replanning, such as handling a supplier outage through a local replan, with the goal of ensuring that the outage and the new plan do not affect other production activities.
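
A hedged sketch of such a local replan in Python; the supplier names, quantities, and spare capacities are invented, and a real engine would consider many more constraints. The idea is that only the failed supplier’s quantity moves, so the rest of the plan is untouched.

    # Hypothetical local tactical replan: reallocate only the quantity sourced
    # from the failed supplier, leaving all other production lines unchanged.

    plan = {"supplier_a": 400, "supplier_b": 300, "supplier_c": 300}
    spare_capacity = {"supplier_b": 250, "supplier_c": 200}

    def local_replan(plan, failed, spare):
        """Shift the failed supplier's quantity to suppliers with spare capacity."""
        shortfall = plan.pop(failed)
        for supplier, capacity in spare.items():
            moved = min(shortfall, capacity)
            plan[supplier] += moved
            shortfall -= moved
        if shortfall:
            print(f"unresolved shortfall of {shortfall} units -> escalate")
        return plan

    print(local_replan(plan, "supplier_a", spare_capacity))
    # -> {'supplier_b': 550, 'supplier_c': 450}; all other lines untouched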

Two important terms that don’t appear in the IOM name per se are “collaboration” and “ecosystem”.

“Collaboration” defines the nature of the interactions between the different members of the supply chain.  It holds that no decision can be made in isolation; instead, decisions have a context or state that defines them.  A typical example is agreeing on an inventory stocking level by first communicating several different what-ifs or scenarios, or proposing different dates or product specifications between one organization and another until agreement is reached.  Information can be changed, shared, retracted, committed, and so on.

“Ecosystem” defines the set of participants within the collaboration, which can include all members from suppliers to customers.  It also extends the definition of the information used in the decision process to include content from public and private sources, as well as feedback, evaluations, and lessons learned within the ecosystem, in the same way that eBay or Amazon provide member-created content about books and products that augments public information such as book descriptions.  In our case, ecosystems provide benefit when performing collaborative actions and decisions toward meeting a goal.

Content Collaboration Update February 15, 2008

Posted by Jeff in collaboration, enterprise 2.0.

The February 2008 issue of IEEE Computer has two important articles that follow up on the topic of using wikis and social networking software for collaboration.  Since our previous posting on Content Collaboration Software was one of the most popular postings of 2007, I’m bringing these articles to your attention.

The article “Wikis: ‘From Each According to His own Knowledge’” by Dan O’Leary of USC describes the history of wikis and typical usage today.  The first wiki was implemented in 1994 by Ward Cunningham.  Wikis offer the following advantages:

  • Structure
  • Consensus
  • Collective Wisdom
  • User Engagement
  • Accuracy
  • Delegation of Control
  • User Management

They have the following limitations:

  • Lack of authority
  • No referees
  • “Too many cooks in the kitchen”
  • Bias
  • Information insecurity
  • Scope creep
  • Decreased contributions – “slow death”
  • Legal problems with content
  • Vandalism

There are a number of potential applications of AI in the area of wikis.

The article “Social Networking” by Alfred C. Weaver and Benjamin B. Morrison of the University of Virginia describes how the mass adoption of social-networking websites points to an evolution in human social interaction.  It discusses MySpace, Facebook, Wikipedia, and YouTube.

We Attended salesforce.com’s ‘Tour de Force’ Event January 18, 2008

Posted by Jeff in Event Reporting, Technology.

The first event of a multi-city roadshow profiling salesforce.com’s platform, called Force.com, was held yesterday in San Francisco.  We saw presentations on their latest platform capability, called VisualForce.  This is a tag-based toolset for building custom user interfaces.  In total, Force.com includes the following components:

  • The runtime environment, which forms the basis of all operations, and persistence.  The runtime environment includes standard presentation facilities, standard navigation, on-demand load balancing, account management, and handling of all database-related operations.
  • The building tools, which are all visual, drag and drop tools for creating applications.  These allow the definition of custom objects, custom layouts, custom controls, etc.
  • The scripting/language environment, which is called Apex code.  This is a language similar to PL/SQL, Java or PHP, within which application logic can be written.
  • The presentation environment, which is called VisualForce.  This is defined as a set of tag-based extensions to HTML, in the same way that JSP tags provide presentation tools.
  • The AppExchange, which is Salesforce.com’s facility to package, present, and provide metered access to completed applications which are built with the above four components.

Not surprisingly, the concepts underlying Force.com are very content, or database, oriented.  The APIs have a database-like feel, with operations such as “query”, “update”, and “describe”.  The query language is SQL-based.
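
The snippet below is illustrative only, not actual Force.com client code: a hypothetical Python client whose method names mirror the database-like operations listed above, with an SQL-style query string of the kind described.

    # Illustrative stand-in for a Force.com-style API client (all stubbed).

    class HypotheticalForceClient:
        def query(self, soql):
            """Run an SQL-style query against platform objects (stubbed)."""
            print(f"QUERY: {soql}")
            return [{"Id": "001x0", "Name": "Acme"}]

        def update(self, obj, record_id, fields):
            """Update one record's fields (stubbed)."""
            print(f"UPDATE {obj} {record_id}: {fields}")

        def describe(self, obj):
            """Return metadata about an object (stubbed)."""
            return {"name": obj, "fields": ["Id", "Name"]}

    client = HypotheticalForceClient()
    rows = client.query("SELECT Id, Name FROM Account WHERE Industry = 'Tech'")
    client.update("Account", rows[0]["Id"], {"Rating": "Hot"})
    print(client.describe("Account"))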

For another description, see the article on the event by Phil Wainewright.  Force.com is one of a rapidly expanding number of cloud computing platform choices available to ISVs today.

It appears that platforms are going to be the important territory to control in the enterprise space during 2008.  This is the latest level of abstraction: we have moved from the browser wars of some ten years ago, to the programming model battles (J2EE vs. .NET) of the early part of this decade, to the platform struggles of the latter part of the decade.  There are clearly important platform plays coming from SAP and Oracle.

Introduction to Semantic Information Processing January 11, 2008

Posted by Jeff in enterprise 2.0, Technology.

One of the major research areas that is now appearing in the IT industry is called “semantic information processing”, or the building and using of semantic information repositories.  A semantic information repository is a data collection that links concepts and names together.  We have probably all seen the need for this when performing Internet searches.  Suppose that we search for “chip”: this could mean a search for semiconductor chip, potato chip, or a person named Chip.  From the context, we can determine the meaning.

For instance, if I wrote the search:  “tell me about the nutrition value in chips”, I am probably talking about potato chips, since they are the only kind of chip that is food.

If I wrote the search “collect sales of the Intel T7200 processor chip”, the words “Intel” and “processor” would mean that I am talking about a computer chip or a semiconductor chip.
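
A toy Python sketch of this kind of disambiguation, using invented sense glosses and a simple overlap count; real systems use far richer resources and statistics, but the mechanism is the same.

    # Toy word-sense disambiguation for "chip" (senses and glosses invented).

    SENSES = {
        "semiconductor chip": {"intel", "processor", "silicon", "transistor"},
        "potato chip":        {"nutrition", "snack", "food", "salt"},
        "person named Chip":  {"mr", "mrs", "named", "who"},
    }

    def disambiguate(query):
        """Pick the sense whose context words overlap the query most."""
        words = set(query.lower().split())
        return max(SENSES, key=lambda s: len(SENSES[s] & words))

    print(disambiguate("tell me about the nutrition value in chips"))
    # -> potato chip
    print(disambiguate("collect sales of the Intel T7200 processor chip"))
    # -> semiconductor chip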

The intent is to enhance the usability and usefulness of the Web and its interconnected resources.

Historical Context

Early efforts in semantics were the knowledge representation and machine understanding efforts of the AI field from the late 1960s to the late 1980s.  During this period, many researchers focused on how to represent knowledge such as scenes in stories, where the character Jack is sitting in a house, Jack is married to Jill, Jack has the job of fetching water, water is located at the well, the well is located on a hill, and the hill is behind the house.  Each of these phrases connects two nouns with a relationship.
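
Such a scene can be captured directly as (subject, relation, object) triples. The Python sketch below is illustrative, with invented relation names, and answers a simple location question by following the “located” links:

    # The story above, represented as (subject, relation, object) triples.

    TRIPLES = {
        ("jack", "sits_in", "house"),
        ("jack", "married_to", "jill"),
        ("jack", "has_job", "fetching water"),
        ("water", "located_at", "well"),
        ("well", "located_on", "hill"),
        ("hill", "behind", "house"),
    }

    def where_is(thing):
        """Follow 'located' relations to answer a simple location question."""
        for subj, rel, obj in TRIPLES:
            if subj == thing and rel.startswith("located"):
                return obj
        return "unknown"

    print(where_is("water"))   # -> well
    print(where_is("well"))    # -> hill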

Such knowledge context would then be used to guide the understanding of text or speech.  While many interesting presentations and demonstrations were given, the effort died with the “AI bubble” of the mid-to-late 1980s.

Around the late 1990s, new efforts to organize semantic knowledge appeared, based on descriptive technologies such as the Resource Description Framework (RDF) and the Web Ontology Language (OWL), and the data-centric, customizable Extensible Markup Language (XML).  All take advantage of the hyperlinks already present in web-based content.

There was a very important article published in 2001 on “The Semantic Web” by authors including Tim Berners-Lee.

Recent Products

With the web plus today’s faster computers, the semantic information concept has been brought back.

One of the most interesting products in this area is called “Twine”, by Radar Networks. According to the company’s description, Twine uses extremely advanced machine learning and natural-language processing algorithms that give it capabilities beyond anything that relies on manual tagging. The tool uses a combination of natural-language algorithms to automatically extract key concepts from collections of text, essentially tagging them automatically. These algorithms adroitly handle ambiguous sets of words, determining, for example, whether J.P. Morgan is a person or a company, depending on the context. And Twine can find the subject of a text even if a keyword is never mentioned, the company says, by using statistical machine learning to compare the text with data sources such as Wikipedia.

See also Powerset’s presentation at IWSC.

Data Sources for a Semantic Processing Application

One of the data sources used is WordNet, which is populated with 137,543 word-matching pairs.

These applications require, in part or whole, data that is available for sharing either within or across an enterprise. Represented in RDF, this data can be generated from a standard database, mined from existing Web sources, or produced as markup of document content.

Machine-readable vocabularies for describing these data sets or documents are likewise required. The core of many Semantic Web applications is an ontology, a machine-readable domain description, defined in RDFS or OWL. These vocabularies can range from a simple “thesaurus of terms” to an elaborate expression of the complex relationships among the terms or rule sets for recognizing patterns within the data.

The advent of RDF query languages has made it possible to create three-tiered Semantic Web applications similar to standard Web applications.  In these applications, queries are issued from the middle tier to the semantic repositories on the back tier.
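
As a small sketch of such a back-tier repository, the widely used open-source rdflib library for Python can build a tiny RDF graph and answer a SPARQL query from the middle tier; the namespace and data below are invented for illustration.

    # Tiny RDF repository queried with SPARQL, using the rdflib library.

    from rdflib import Graph, Literal, Namespace

    EX = Namespace("http://example.org/supply/")
    g = Graph()
    g.add((EX.IntelT7200, EX.isA, EX.ProcessorChip))
    g.add((EX.IntelT7200, EX.madeBy, EX.Intel))
    g.add((EX.IntelT7200, EX.unitPrice, Literal(241.0)))

    results = g.query("""
        PREFIX ex: <http://example.org/supply/>
        SELECT ?part ?price WHERE {
            ?part ex:isA ex:ProcessorChip .
            ?part ex:unitPrice ?price .
        }
    """)
    for part, price in results:
        print(part, price)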

However, there is a three-way challenge that is holding up the implementation of semantic web systems:

  • motivating companies or governments to release data
  • motivating ontology designers to build and share domain descriptions
  • motivating Web application developers to explore Semantic-Web-based applications

Web 3.0

We have all heard of Web 2.0, so what would Web 3.0 be?

Some of the best forecasts that I have seen match the above discussion: a Web 3.0 application is a Web 2.0 application that has knowledge and “thinks”.

Lately the concept of semantic information processing has been appearing in the current IT world.  In one of Bob Parker’s presentations, he describes the role of a “Semantic Information Repository” as:

“essential to improving decision making will be the ability to organize all types of information.  At the heart of the repository for large organizations will be an operational data store that can organize large volumes of transactional data into hierarchical, analytic friendly forms.  The data store should be augmented by effective master data management that can provide a harmonized view of key subject matters like suppliers, products, assets, customers, and employees in the context of the value chain being monitored.  The ability to bring some structure to unstructured content like documents completes the repository”

Some of the places where we are seeing semantic resolution of information become important:

  • Data cleansing – find product name equivalences (see the sketch after this list)
  • Business process management – it becomes easier to write a business process if all of the terms and references have been reduced to standards by a semantic pre-processing layer in the runtime engine.
  • Business intelligence – it becomes easier to generate intelligence and conclusions if the corresponding data sets and events have been standardized through a semantic processing step.
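
A minimal sketch of the data-cleansing case in Python, using a known-equivalence table plus fuzzy matching from the standard library; the catalog, synonyms, and names are all invented.

    # Hypothetical product-name cleansing: map free-form partner names to a
    # canonical catalog via a synonym table plus fuzzy matching (stdlib).

    import difflib

    CANONICAL = ["Intel T7200 Processor", "512MB DDR2 Module", "80GB SATA Drive"]
    SYNONYMS = {"t7200 cpu": "Intel T7200 Processor"}   # known equivalences

    def cleanse(raw_name):
        key = raw_name.strip().lower()
        if key in SYNONYMS:                   # exact known equivalence first
            return SYNONYMS[key]
        match = difflib.get_close_matches(raw_name, CANONICAL, n=1, cutoff=0.5)
        return match[0] if match else None    # None -> route to a human

    print(cleanse("T7200 CPU"))               # -> Intel T7200 Processor
    print(cleanse("80 GB SATA drive"))        # -> 80GB SATA Drive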

End-of-Year Summary of 2007 December 28, 2007

Posted by Jeff in Event Reporting.

This has been a great year for Serus and for the members of our community.  I haven’t discussed Serus itself much in this blog, since you can find that at the Serus web site.  Suffice it to say that we grew by over 100%, and won a Deloitte award as the 7th-fastest-growing company in Silicon Valley.

During this year, our visibility increased greatly, in media, with analysts, with organizations such as FSA (now GSA), and through this blog.  We recorded over 5,000 blog hits between the end of August and now (prior to that, our blogging system didn’t keep statistics).  Our most popular postings were:

A few observations about trends during the year: while the “enterprise 2.0” term was very hot during the first half, I haven’t heard it mentioned as much recently.  Instead, we see “collaborative decision environment” as an upcoming term.

During 2008, we will be continuing to publish, and are expecting to get even more hits, because the blog will be tied to our new corporate web site.

See you in 2008!

Supply Chain Planning Algorithms December 12, 2007

Posted by Jeff in Technology.

Introduction

Supply chain planning is a critical task in operations management.  On one hand, it algorithmically solves the immediate problems of finding availability schedules, sourcing decisions, and resource allocations to produce a plan that meets goals effectively.  On the other, it provides insight into the trade-offs among different factors, constraints, and rules.
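
As a toy illustration of the sourcing side, the open-source SciPy library can solve a two-supplier sourcing decision as a tiny linear program; all costs, capacities, and the demand figure below are invented.

    # A sourcing decision posed as a tiny linear program (invented numbers):
    # meet demand of 100 units from two suppliers at minimum total cost.

    from scipy.optimize import linprog

    cost = [3.0, 4.0]                 # unit cost at supplier A and supplier B
    A_eq, b_eq = [[1, 1]], [100]      # constraint: total supply equals demand
    bounds = [(0, 70), (0, 60)]       # per-supplier capacity limits

    res = linprog(cost, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    print(res.x)   # -> [70. 30.]: fill cheaper supplier A to capacity, rest from B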

However, supply chain planning means different things in different situations.  In this post, we try to classify the types of supply chain planning problems, address the modeling issues, and provide background on the algorithms used. (more…)

Managed Services for Semiconductor Manufacturers December 3, 2007

Posted by Mike Lazich in Business.

The Serus Managed Operations Services program provides outsourcing services for managing enterprise solutions and processes involved in semiconductor manufacturing operations.  This document describes the services that Serus can provide to manage such solutions. (more…)

Interesting Post on Business Optimization December 1, 2007

Posted by Jeff in Business, Technology.

Today I saw the following interesting post by Timo Elliott on the transition from business automation to business optimization.  He made the following point:

The last decade has been about automating business processes. The next decade will be about building business-centric applications. For the first time, organizations will have the opportunity to apply a systems approach to best-practice use of information across the organization as a whole, by synchronizing the two key components of corporate performance improvement: operational excellence and strategic change.

The posting goes on to discuss the combinations of ERP and BI companies that have occurred recently.

Online Communities and the Business Ecosystem November 26, 2007

Posted by Jeff in Business, collaboration, enterprise 2.0.

Introduction

After writing a number of postings on technology and business processes, we noticed that the readership stats for collaborative technology and collaborative business processes are now about equal.  To us, this provides confirmation that it is time to focus on the business ecosystem that is created by using these two concepts together.  We discussed this a few times back in Spring 2007 in our posting on “Enterprise 2.0“.

What is a business ecosystem?  The following definition comes from Ray Wang of Forrester:

These ecosystems [are] increasingly specialized and rely on the intellectual property (IP) innovation networks of Partners, Suppliers, Financiers, Inventors, Transformers and Brokers.  As software vendors and systems integrators expand into new markets, they will form solutions-centric ecosystems to enable exclusive, complementary, and “co-opetive” relationships.

(more…)