
We Attended salesforce.com’s ‘Tour de Force’ Event January 18, 2008

Posted by Jeff in Event Reporting, Technology.

The first event of a multi-city roadshow profiling salesforce.com’s platform, called Force.com, was held yesterday in San Francisco.  We saw presentations on their latest platform capability, called VisualForce.  This is a tag-based toolset for building custom user interfaces.  In total, Force.com includes the following components:

  • The runtime environment, which forms the basis of all operations, and persistence.  The runtime environment includes standard presentation facilities, standard navigation, on-demand load balancing, account management, and handling of all database-related operations.
  • The building tools, which are all visual, drag and drop tools for creating applications.  These allow the definition of custom objects, custom layouts, custom controls, etc.
  • The scripting/language environment, which is called Apex code.  This is a language similar to PL/SQL, Java or PHP, within which application logic can be written.
  • The presentation environment, which is called VisualForce.  This is defined as a set of tag-based extensions to HTML, in the same way that JSP tags provide presentation tools.
  • The AppExchange, which is Salesforce.com’s facility to package, present, and provide metered access to completed applications which are built with the above four components.

Not surprisingly, the concepts underlying Force.com are very content- and database-oriented.  The APIs have a database-like feel, with operations such as “query”, “update”, and “describe”, and the query language is SQL-based.
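
To get a feel for this, here is a rough sketch of what working against that kind of API looks like from Python, using the third-party simple-salesforce client purely as an illustration (the client, credentials, and object names below are our own stand-ins, not anything shown at the event):

```python
# A rough sketch (not from the event) of the database-like feel of the
# Force.com API, using the third-party simple-salesforce Python client.
# The credentials and object names are placeholders.
from simple_salesforce import Salesforce

sf = Salesforce(username="user@example.com",
                password="password",
                security_token="token")

# "describe": inspect the metadata of a standard object.
account_meta = sf.Account.describe()
print([field["name"] for field in account_meta["fields"]][:5])

# "query": the query language (SOQL) is SQL-like.
result = sf.query("SELECT Id, Name FROM Account WHERE Name LIKE 'Acme%'")
for record in result["records"]:
    print(record["Id"], record["Name"])

# "update": modify a record by its Id.
if result["records"]:
    sf.Account.update(result["records"][0]["Id"],
                      {"Description": "Updated through the API"})
```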

For another description, see the article on the event by Phil Wainewright.  Force.com is one of a rapidly expanding number of cloud computing platform choices available to ISVs today.

It appears that platforms are going to be the important territory to control in the enterprise space during 2008.  This is the latest level of abstraction: we have moved from the browser wars of some ten years ago, to the programming model battles (J2EE vs. .NET) of the early part of this decade, to the platform struggles of the latter part of the decade.  There are clearly important platform plays coming from SAP and Oracle.


Introduction to Semantic Information Processing January 11, 2008

Posted by Jeff in enterprise 2.0, Technology.

One of the major research areas now emerging in the IT industry is called “semantic information processing”: the building and use of semantic information repositories.  A semantic information repository is a data collection that links concepts and names together.  We have probably all seen the need for this when performing Internet searches.  Suppose we search for “chip”: this could mean a semiconductor chip, a potato chip, or a person named Chip.  From the context, we can determine the intended meaning.

For instance, if I wrote the search “tell me about the nutrition value in chips”, I am probably talking about potato chips, since they are the only kind of chip that is food.

If I wrote the search “collect sales of the Intel T7200 processor chip”, the words “Intel” and “processor” indicate that I am talking about a computer chip, i.e. a semiconductor chip.
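
As a toy illustration of this kind of context-based disambiguation (not any particular product’s algorithm), a few context clue words go a long way:

```python
# A toy sketch of context-based disambiguation of "chip" (not any particular
# product's algorithm): score each sense by its context clue words.
CONTEXT_CLUES = {
    "semiconductor chip": {"intel", "processor", "cpu", "silicon", "sales"},
    "potato chip": {"nutrition", "snack", "calories", "salt", "food"},
    "person named Chip": {"mr", "mrs", "met", "said", "phone"},
}

def disambiguate(query: str) -> str:
    words = set(query.lower().split())
    scores = {sense: len(words & clues) for sense, clues in CONTEXT_CLUES.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "ambiguous"

print(disambiguate("tell me about the nutrition value in chips"))       # potato chip
print(disambiguate("collect sales of the Intel T7200 processor chip"))  # semiconductor chip
```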

The intent is to enhance the usability and usefulness of the Web and its interconnected resources.

Historical Context

Early efforts in semantics were the knowledge representation and machine understanding efforts of the AI field from the late 1960s to the late 1980s.  During this period, many researchers focused on how to represent knowledge such as scenes in stories, where the character Jack is sitting in a house, Jack is married to Jill, Jack has the job of fetching water, the water is located at the well, the well is located on a hill, and the hill is behind the house.  Each of these phrases connects two nouns with a relationship.
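
Written down, such a scene is just a list of (noun, relationship, noun) facts; a minimal sketch:

```python
# The story facts above, written as (subject, relationship, object) triples:
# a simple sketch of this style of knowledge representation.
facts = [
    ("Jack", "is sitting in", "house"),
    ("Jack", "is married to", "Jill"),
    ("Jack", "has the job of", "fetching water"),
    ("water", "is located at", "well"),
    ("well", "is located on", "hill"),
    ("hill", "is behind", "house"),
]

def about(noun):
    """Everything the little knowledge base says involving a noun."""
    return [fact for fact in facts if noun in (fact[0], fact[2])]

print(about("well"))   # facts that mention the well
```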

Such a knowledge context would then be used to guide the understanding of text or speech.  While many interesting presentations and demonstrations were given, the effort died with the “AI bubble” of the mid-to-late 1980s.

Around the late 1990s, new efforts to organize semantic knowledge appeared, based on the use of descriptive technologies such as the Resource Description Framework (RDF) and the Web Ontology Language (OWL), and the data-centric, customizable Extensible Markup Language (XML).  All take advantage of the hyperlinks already present in web-based content.

A very important article on “The Semantic Web”, by Tim Berners-Lee, James Hendler, and Ora Lassila, was published in Scientific American in 2001.

Recent Products

With the Web and today’s faster computers, the semantic information concept has been revived.

One of the most interesting products in this area is called “Twine”, by Radar Networks.  According to the company’s description, Twine uses extremely advanced machine learning and natural-language processing algorithms that give it capabilities beyond anything that relies on manual tagging.  The tool uses a combination of natural-language algorithms to automatically extract key concepts from collections of text, essentially tagging them automatically.  These algorithms handle ambiguous sets of words, determining, for example, whether J.P. Morgan is a person or a company depending on the context.  Twine can also find the subject of a text even if a keyword is never mentioned, by using statistical machine learning to compare the text with data sources such as Wikipedia.
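
As a very rough sketch of that last idea, comparing a text against reference documents and picking the closest match can suggest a subject even when the obvious keyword never appears (this is only an illustration with made-up reference snippets, not Twine’s actual algorithm):

```python
# A toy sketch of finding a text's subject by comparing it with reference
# documents (stand-ins for Wikipedia articles here); this is only an
# illustration, not Twine's actual algorithm.
import math
from collections import Counter

reference_articles = {
    "J.P. Morgan (bank)": "bank investment finance securities loans markets",
    "J.P. Morgan (person)": "financier biography family born life died",
}

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def subject_of(text: str) -> str:
    words = Counter(text.lower().split())
    scores = {title: cosine(words, Counter(body.split()))
              for title, body in reference_articles.items()}
    return max(scores, key=scores.get)

print(subject_of("the firm reported strong loans and securities markets"))
```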

See also Powerset’s presentation at IWSC.

Data Sources for a Semantic Processing Application

One of the data sources used is WordNet, which is populated with 137,543 word matching pairs.
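
For readers who want to poke at WordNet directly, it is accessible through the NLTK Python package (assuming NLTK is installed and the WordNet corpus has been downloaded):

```python
# A quick look at WordNet through the NLTK Python interface (assumes NLTK is
# installed; the wordnet corpus is downloaded on first use).
import nltk
nltk.download("wordnet", quiet=True)
from nltk.corpus import wordnet as wn

# Each "synset" groups words that share a meaning; "chip" has several senses.
for synset in wn.synsets("chip"):
    print(synset.name(), "-", synset.definition())

# The words grouped under the first sense.
print(wn.synsets("chip")[0].lemma_names())
```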

These applications require, in part or whole, data that is available for sharing either within or across an enterprise. Represented in RDF, this data can be generated from a standard database, mined from existing Web sources, or produced as markup of document content.

Machine-readable vocabularies for describing these data sets or documents are likewise required. The core of many Semantic Web applications is an ontology, a machine-readable domain description, defined in RDFS or OWL. These vocabularies can range from a simple “thesaurus of terms” to an elaborate expression of the complex relationships among the terms or rule sets for recognizing patterns within the data.

The advent of RDF query languages has made it possible to create three-tiered Semantic Web applications similar to standard Web applications.  These applications issue queries from the middle tier to the semantic repositories in the back tier.
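
Here is a minimal sketch of that middle-tier pattern using the rdflib Python package: define a tiny made-up vocabulary, add a few data triples, and issue a SPARQL query against the resulting repository (the vocabulary, URIs, and data are invented for illustration):

```python
# A minimal sketch of the middle-tier pattern: build an RDF graph with a tiny
# made-up RDFS vocabulary and some data, then issue a SPARQL query against it.
# Uses the rdflib package; vocabulary, URIs, and data are invented.
from rdflib import Graph, Namespace, Literal, RDF, RDFS

EX = Namespace("http://example.org/supply#")
g = Graph()

# A tiny "ontology": Supplier is a class, suppliesPart is a property.
g.add((EX.Supplier, RDF.type, RDFS.Class))
g.add((EX.suppliesPart, RDF.type, RDF.Property))

# Data triples; these could equally have been generated from a database.
g.add((EX.AcmeCorp, RDF.type, EX.Supplier))
g.add((EX.AcmeCorp, EX.suppliesPart, Literal("T7200 processor")))

# The middle tier issues a query to the semantic repository on the back tier.
results = g.query("""
    PREFIX ex: <http://example.org/supply#>
    SELECT ?supplier ?part
    WHERE { ?supplier a ex:Supplier ; ex:suppliesPart ?part . }
""")
for supplier, part in results:
    print(supplier, part)
```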

However, there is a three-way challenge that is holding up the implementation of semantic web systems:

  • motivating companies or governments to release data
  • motivating ontology designers to build and share domain descriptions
  • motivating Web application developers to explore Semantic-Web-based applications

Web 3.0

We have all heard of Web 2.0, so what would Web 3.0 be?

Some of the best forecasts that I have seen match the above discussion.  A Web 3.0 application is a Web 2.0 application that has knowledge and “thinks”.

The concept of semantic information processing has also been appearing lately in the commercial IT world.  In one of his presentations, Bob Parker describes the role of a “Semantic Information Repository” as:

“essential to improving decision making will be the ability to organize all types of information.  At the heart of the repository for large organizations will be an operational data store that can organize large volumes of transactional data into hierarchical, analytic friendly forms.  The data store should be augmented by effective master data management that can provide a harmonized view of key subject matters like suppliers, products, assets, customers, and employees in the context of the value chain being monitored.  The ability to bring some structure to unstructured content like documents completes the repository”

Here are some of the places where we are seeing semantic resolution of information become important:

  • Data cleansing – finding product name equivalences (a small sketch follows this list).
  • Business process management – it becomes easier to write a business process if all of the terms and references have been reduced to standards by a semantic pre-processing layer in the runtime engine.
  • Business intelligence – it becomes easier to generate intelligence and conclusions if the corresponding data sets and events have been standardized through a semantic processing step.
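
For the data cleansing case, even a naive fuzzy-matching sketch shows the idea of finding product name equivalences (real cleansing and MDM tools are, of course, far more sophisticated; the product names below are invented):

```python
# A tiny data-cleansing sketch: find product name equivalences by fuzzy string
# matching with difflib from the standard library. Real cleansing and MDM
# tools are far more sophisticated; the product names are invented.
from difflib import get_close_matches

canonical = ["intel core 2 duo t7200", "intel core 2 duo t7400"]
dirty = ["Intel Core2 Duo T7200", "Core 2 Duo T7400 (Intel)", "widget 9000"]

for value in dirty:
    match = get_close_matches(value.lower(), canonical, n=1, cutoff=0.5)
    print(value, "->", match[0] if match else "no equivalence found")
```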

Supply Chain Planning Algorithms December 12, 2007

Posted by Jeff in Technology.

Introduction

Supply chain planning is a critical task in operations management.  On the one hand, it algorithmically solves the immediate problems of finding availability schedules, sourcing decisions, and resource allocations to produce a plan that meets goals effectively.  On the other hand, it provides insight into the trade-offs among different factors, constraints, and rules.

However, supply chain planning means different things in different situations.  In this post, we try to classify the types of supply chain planning problems, address the modeling issues, and provide background on the algorithms used. (more…)
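
As a preview of the flavor of these problems, here is a toy sourcing-allocation model, minimizing cost subject to demand and capacity, solved as a linear program with SciPy (a deliberately tiny sketch; real planning models are far larger and richer):

```python
# A toy sourcing-allocation problem (minimize cost subject to demand and
# supplier capacity) solved as a linear program with SciPy. A deliberately
# tiny sketch; real planning models are far larger and richer.
from scipy.optimize import linprog

cost = [4.0, 5.0]                   # cost per unit at suppliers A and B
demand = 100                        # total units required

result = linprog(
    c=cost,                         # objective: minimize total cost
    A_eq=[[1, 1]], b_eq=[demand],   # units from A + units from B = demand
    bounds=[(0, 60), (0, 80)],      # supplier capacities
)
print(result.x, result.fun)         # expect 60 units from A, 40 from B
```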

Interesting Post on Business Optimization December 1, 2007

Posted by Jeff in Business, Technology.

Today I saw the following interesting post by Timo Elliott on the transition from business automation to business optimization.  He made the following point:

The last decade has been about automating business processes. The next decade will be about building business-centric applications. For the first time, organizations will have the opportunity to apply a systems approach to best-practice use of information across the organization as a whole, by synchronizing the two key components of corporate performance improvement: operational excellence and strategic change.

The posting goes on to discuss the combinations of ERP and BI companies that have occurred recently.

Above and Beyond Software November 19, 2007

Posted by Shailesh Alawani in Business, Technology.

Introduction

Most of the attention these days is paid to end-user applications.  There is a huge amount of discussion about Web 2.0 applications, social networking, and the latest trends in enterprise application architecture.

However, the visible part of an enterprise application is similar to the visible part of an iceberg:  as menacing as icebergs appear, most of the iceberg is underwater.  Enterprise applications are much the same, with their back-end services and databases being the bulk of the system.

At Serus, we have built industry-leading content integration engines, combined with a set of services offered to keep them operating in top condition, even as data requirements change.  We call these “managed services”, and we call the trend “Software plus Services”.  Think of these services as being “above and beyond” the application itself. (more…)

The Role of Master Data Management in Operations November 16, 2007

Posted by Mike Lazich in Business, Technology.

Introduction

Master Data Management (MDM) is a discipline for keeping the content of your key reference data consistent across different parts of your organization.  Examples include:

  • Standard customer data
  • Standard part data
  • Standard pricing data

MDM has emerged in the last several years as a separate enterprise software architecture category as the requirement for consistently defined and maintained enterprise data has become more apparent.  (more…)
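
One small illustration of what consistent reference data means in practice: merging duplicate customer records from two systems into a single “golden” record (the field names and merge rule below are invented for illustration):

```python
# A small sketch of one MDM idea: merging duplicate customer records from two
# systems into a single "golden" record, letting the most recently updated
# non-empty value win for each field. Field names are invented.
from datetime import date

records = [
    {"customer_id": "C-100", "name": "Acme Corp.", "phone": "",
     "updated": date(2007, 3, 1)},
    {"customer_id": "C-100", "name": "ACME Corporation", "phone": "408-555-0100",
     "updated": date(2007, 9, 15)},
]

def golden_record(recs):
    golden = {}
    for rec in sorted(recs, key=lambda r: r["updated"]):  # oldest first
        for field, value in rec.items():
            if value not in ("", None):                   # newer non-empty wins
                golden[field] = value
    return golden

print(golden_record(records))
```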

Collaborative Decision Environments are an Upcoming Trend October 17, 2007

Posted by Jeff in Business, collaboration, Technology.

Last month, we mentioned a meeting with Bob Parker of Manufacturing Insights in which a number of trends in enterprise software were discussed.  Bob recently published a perspective document in which he grouped a number of important concepts under the term “Collaborative Decision Environments”.  Many of the concepts were initially featured in an interview with Bob on predictions for 2007 that appeared in Supply Demand Chain Exec magazine earlier this year.

Examples that he gave included Teradata, SAS, Business Objects, and Oracle.  All of these firms are fielding products for collaboration and decision-making.  Within such a product, one can review and analyze data, and share aspects of the conclusions.

This was the first time we had seen the concept applied directly to manufacturing.  Many of the prior examples were drawn from emergency situation handling, such as responses to natural disasters.

Here are some of the important characteristics of a CDE:

  • Provide shared access to a baseline content set.
  • Provide means to propose changes to the content.
  • Allow users to navigate through a sequence of changes, and commit or retract them within scenarios (see the sketch after this list).
  • Define problems to be addressed, or goals to be accomplished, and use them to set the context of the decision.
  • Provide semantic resolution of terms within disparate data sources.  In the manufacturing operations world, this would be a harmonized view of key content such as suppliers, products, assets, customers, and employees.
  • Provide analytic functions that allow evaluation of current or proposed data in terms of the problems or goals.  This capability should include analytics that are retrospective (what happened), real-time (what is happening), or predictive (what will happen).
  • Provide a social network or ecosystem within which participants can rank the relevance of comments, changes, and contributors, again in terms of the goals.  This enables the network or ecosystem to determine where expertise is located, and to expand to include additional experts or knowledge.
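
To make the scenario idea concrete, here is a bare-bones sketch of a shared baseline content set plus proposed changes that can be committed or retracted (an illustration only, not the Serus implementation):

```python
# A bare-bones sketch of the scenario idea: a shared baseline content set plus
# proposed changes that can be committed or retracted. An illustration only,
# not the Serus implementation.
class Scenario:
    def __init__(self, baseline):
        self.baseline = dict(baseline)   # shared baseline content
        self.pending = []                # proposed (key, new_value) changes

    def propose(self, key, new_value):
        self.pending.append((key, new_value))

    def commit(self):
        for key, new_value in self.pending:
            self.baseline[key] = new_value
        self.pending.clear()

    def retract(self):
        self.pending.clear()             # drop proposals, keep the baseline

    def view(self):
        merged = dict(self.baseline)     # baseline with proposals applied
        merged.update(dict(self.pending))
        return merged

s = Scenario({"forecast_units": 1000, "supplier": "Acme"})
s.propose("forecast_units", 1200)
print(s.view())   # shows the proposed change
s.retract()
print(s.view())   # back to the shared baseline
```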

Not covered on this list are some of the communications technologies, such as instant messaging or video collaboration.  We see those as being important as well, but they stem from infrastructure technologies outside of the computational decision-making process that we are focused on here.

It hasn’t appeared in the material from analysts yet, but it was clear from our discussion with Bob that Serus is also providing an example of a Collaborative Decision Environment.  This is mentioned in our posting on our Decision Support Information Architecture from earlier this month.  Over the next month, we will be clarifying more of the terms defined here and improving the alignment.

Notes on Enterprise Software Architecture – Part III October 12, 2007

Posted by Jeff in bpm, Business, collaboration, enterprise 2.0, Technology.

In our previous postings we looked at definitions, then at the structure of Enterprise Systems Architecture, and Enterprise Application Architecture.  In this posting, we go beyond the typical definitions, and look at some of the challenges that we have been addressing at Serus, and what their impact has been on our architecture.  We also discuss the definition and impact of Enterprise 2.0 technologies.

Serus is focused on the evolution of enterprise software architecture toward operations management.  This category of software deals with supporting the ongoing decision-making of schedulers, planners, manufacturing operations staff and managers, etc.  (more…)

Decision Support Information Architecture October 8, 2007

Posted by Jeff in collaboration, enterprise 2.0, Technology.

We view the Serus core information architecture as a “federated content hub for decision support”.  It has the following layers:

(Diagram: Decision Support Information Architecture layers)

This model of content and operations on content appears ideally suited to solutions that are based on integrated information. It combines several aspects of reporting and business intelligence with the business process driven concepts.

There are several important layers:

Content Management: this focuses on the raw representation of fetched or received content.

Decision Management: this deals with deriving new information, or setting aside information or changes to values as part of scenarios.

Goal-Driven Analysis: this deals with creating search routines that consider different options in order to improve an objective function score.  The objective function might be lowest cost, shortest delay, etc.

Presentation: this deals with the screen-building and screen navigation that allow the user to access information at each of the layers below, from raw data to particular scenarios.
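
As a small illustration of the Goal-Driven Analysis layer, candidate scenarios can be scored with an objective function and the best one selected (the scenarios and weights below are invented for illustration):

```python
# A small sketch of the Goal-Driven Analysis layer: score candidate scenarios
# with an objective function (lower is better) and pick the best. The
# scenarios and weights are invented for illustration.
scenarios = [
    {"name": "ship from plant A", "cost": 12000, "delay_days": 2},
    {"name": "ship from plant B", "cost": 9500, "delay_days": 6},
    {"name": "split shipment", "cost": 10400, "delay_days": 3},
]

def objective(scenario, weight_cost=1.0, weight_delay=500.0):
    # Combine cost with a penalty for each day of delay.
    return weight_cost * scenario["cost"] + weight_delay * scenario["delay_days"]

best = min(scenarios, key=objective)
print(best["name"], objective(best))
```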

Notes on Enterprise Software Architecture – Part II October 5, 2007

Posted by Jeff in bpm, collaboration, enterprise 2.0, Technology.

In our previous posting, we reviewed some basic definitions, and divided the problem into Enterprise Systems Architecture and Enterprise Application Architecture.  In this post, we deal with the latter.

Enterprise Application Architecture deals with the structure inside the application.  It covers aspects such as how the application connects with databases and other data sources, how the business logic is organized, and how the presentation logic is organized.  As is typically the case, increased decoupling of these layers greatly improves the flexibility and maintainability of the application.

It is within EAA that the concepts of software organization known as “Design Patterns” apply.  These are canonical, reusable structures within software that have been identified and refined.  The first literature on this concept came out in the mid-1990s. (more…)
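
As a small illustration of the kind of decoupling a design pattern provides, here is a sketch of a repository that separates data access from business logic (the names are invented for illustration):

```python
# A sketch of one classic design pattern, a repository that separates data
# access from business logic, to illustrate the decoupling of layers. The
# names are invented for illustration.
from typing import Dict, Optional

class OrderRepository:
    """Data-access layer: hides where and how orders are stored."""
    def __init__(self):
        self._orders: Dict[str, dict] = {}   # stand-in for a database table

    def save(self, order_id: str, order: dict) -> None:
        self._orders[order_id] = order

    def find(self, order_id: str) -> Optional[dict]:
        return self._orders.get(order_id)

class OrderService:
    """Business-logic layer: knows the rules, not the storage details."""
    def __init__(self, repository: OrderRepository):
        self.repository = repository

    def place_order(self, order_id: str, quantity: int) -> None:
        if quantity <= 0:
            raise ValueError("quantity must be positive")
        self.repository.save(order_id, {"quantity": quantity, "status": "placed"})

service = OrderService(OrderRepository())
service.place_order("PO-1", 10)
print(service.repository.find("PO-1"))
```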