Wednesday, August 31, 2016

Hello Sense and Respond, Bye Bye Command and Control

Transforming the retail industry with real-time insights

Guest Blog by Shalini Raghavan, Senior Director, Product Management at FICO

A few years ago, my colleague Matt Beck, General Manager for FICO’s marketing solution, and I were discussing the challenges of doing business in an omnichannel world. Our ability to interact with customers across multiple communication channels is just one factor behind the explosion of data available today. Yet not all businesses are able to distill the essence of this data into decisions of value. And even when they can discern a valid and valuable decision that would improve the customer experience, they frequently cannot make it actionable at the point of interaction with the consumer. The question is: how do we “Sense and Respond” to what this data is telling us, so that we bring the right value to the customer at the right time?

Sense and Respond isn’t just some esoteric concept that came out of control theory (although some of FICO’s technology has its roots in missile defense – but that’s a story for another day). It’s about continuously looking at behaviors to understand the who, when, where, what and why. This information can be constantly used to adapt and improve decisions and outcomes. Think of it as continuous insights but without the offline analyses. This is insight in-stream, which can be put into action while those insights are still novel and the opportunities most timely.

So how does Sense and Respond get out of the theory books and become a reality?

#1 - Break down artificial barriers when it comes to data
The first step is to break down artificial barriers when it comes to data. Treating batch and streaming sources differently only limits our ability to tap into important signals that might help us improve decisions and outcomes. This matters because both types of sources offer enormous value in generating insight and improving decisions and actions, especially when used in combination.

#2 - Capture opportunities in each moment by instantly deriving actionable insights from data
The second step is to capture opportunities in each moment by instantly deriving actionable insights from data. Predictive and prescriptive analytics have long been used to power decisions and actions. Often, however, these analytics are run in batch, extracting decision value only after an event has occurred, which makes it impossible to respond in the moment with a decision of value. Making these analytics contextually aware, and enabling them to run in near real time and real time, is key to turning our most valuable insights into customer value at the moment it will be most appreciated.

#3 - Make systems more usable and manageable
The third step is to make systems that sense and respond more usable and manageable, so that it is possible to unleash the power they offer while still maintaining a modest solution footprint and the kind of management controls and test frameworks needed to ensure quality results. Pairing real-time decisions with response and results dashboards is not only possible, but critical to ensuring the performance of your decision strategy.

We put these notions to the test in a marketing example for a retailer that was looking to revamp its existing marketing offerings. By its own account, this retailer possessed ample and timely data on its customers. The results of traditional campaign-style engagement methods were hit or miss, and the retailer felt it was too slow in responding to consumer needs and market demands. A team of internal data scientists was creating analytics that were clearly predictive, but which could not be operationalized in real time at the point of interaction.

How do you know which offer is the best one, at the current time, for each and every consumer?
Retailers carry a bevy of products that customers can choose from, and this makes for a very interesting analytic problem. How do you know which offer is the best one, at the current time, for each and every consumer? For several years, FICO has had offerings that use predictive and prescriptive models to look at consumer behavior across products, as well as a myriad of logistical and business constraints, to optimally determine the right offer and timing. In many cases, the solution considers activity on many thousands of products and offers. For many of our customers, this offering has worked rather well as a batch solution, teeing up offers for the next customer interaction. The clear need was for FICO to transform our batch-oriented solution into one that would serve the real-time needs of this retailer and enable it to generate similarly powerful insights for the current interaction.

Fortunately, we have been transforming our analytics IP in this space in conjunction with key platform capabilities that enable real-time model execution at scale. We, like others, felt that “data is that fourth factor.” So, very early on, we decided that our platform would be one that helps extract the value of all the moments in that data. We are data agnostic and do not make artificial distinctions between batch and stream; we simply treat all data as a source that carries valuable information we may need to act upon immediately.

The most exciting piece is that we have been able to put the smarts that help sense and respond in a box. We make that happen through something we call contextual profiles. Using these profiles, we can add in the context that is so important to how we sense and respond. Whether it is the customer’s location, changes in purchase behavior, or historical behavior, all of these types of data can be distilled into a contextual profile that can be used in combination with analytics for real-time action. We use these profiles in rules and predictive models, all of which can run on those moments to drive action in real time.
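To make the idea more concrete, here is a minimal sketch in R of what such a profile could look like: a small set of incrementally maintained summaries that a rule or model reads at decision time. The field names, update logic and rule below are purely illustrative assumptions, not FICO's implementation.

    # Illustrative sketch only: a toy "contextual profile" maintained incrementally.
    # Field names and logic are hypothetical, not FICO's implementation.
    new_profile <- function() {
      list(txn_count = 0, total_spend = 0, last_txn_time = NA, last_location = NA)
    }

    update_profile <- function(profile, event) {
      # 'event' is a list such as list(amount = 42.5, time = Sys.time(), location = "store_123")
      profile$txn_count     <- profile$txn_count + 1
      profile$total_spend   <- profile$total_spend + event$amount
      profile$last_txn_time <- event$time
      profile$last_location <- event$location
      profile
    }

    # A trivial "sense and respond" rule that reads the profile at decision time.
    choose_action <- function(profile, event) {
      avg_spend <- profile$total_spend / max(profile$txn_count, 1)
      if (!is.na(profile$last_location) &&
          identical(event$location, profile$last_location) &&
          event$amount > 2 * avg_spend) {
        "offer_loyalty_bonus"  # unusually large spend at a familiar location
      } else {
        "no_action"
      }
    }

    # Usage: sense the incoming event against the profile, then fold it in.
    p <- new_profile()
    p <- update_profile(p, list(amount = 20, time = Sys.time(), location = "store_123"))
    choose_action(p, list(amount = 95, location = "store_123"))   # "offer_loyalty_bonus"
    p <- update_profile(p, list(amount = 95, time = Sys.time(), location = "store_123"))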

Our visual composition capabilities allowed us to build a fully working, real-world solution in a few short weeks. No custom code was involved. Bringing in new sources of data, operationalizing a model and tuning the solution were all accomplished with a few clicks. Testing and swapping out models was as simple as exchanging PMML with a few more clicks. And if that sounds too good to be true, we could score many thousands of predictive models for a single customer, across that bevy of products, in 250 milliseconds on a very modest server. We compared this approach to a more traditional one and were pleasantly surprised to find we used 200x less compute power. The use of contextual profiles reduced the number of required calculations, eliminating the need to go back to transactional databases to recalculate summarizations that had already been performed. Those of us who are close to these efforts know that we have been pushing for efficient “Sense and Respond” machines. But a 200x improvement was more than we expected.

Many of us see benchmarks like this in a lab setting. But observing this successfully in a real-world problem is exciting. Using our technology, we were able to prove that Sense and Respond is real.

For all those marketers out there, consider bidding adieu to “Command and Control” style campaigns. Sense and Respond is here and you can use it to engage with your customers like never before. And in the coming months, you’ll see us transform more domains with our offering.

Tuesday, August 2, 2016

The Data Mining Group Releases PMML v4.3

Another key milestone for PMML and interoperability in data science!

The Data Mining Group (DMG), a vendor-led consortium of companies and organizations developing standards for statistical and data mining models, announced the general availability of version 4.3 of the Predictive Model Markup Language (PMML):

Chicago, IL 8/2/2016 – The Data Mining Group is proud to announce the release of PMML v4.3. PMML is an application and system independent XML interchange format for statistical and data mining models. The goal of the PMML standard is to encapsulate a model independent of applications or systems using an XML configuration file so that two different applications (the PMML Producer and Consumer) can use it.

“The PMML standard delivers true interoperability, enabling machine learning and predictive models to be deployed across IT platforms,” says Michael Zeller, CEO of Zementis, Inc. “A common standard ensures efficiency and fosters collaboration across organizational boundaries which is essential for data science to scale beyond its current use cases. With its latest release, PMML has matured to the point where it not only has extensive vendor support but also has become the backbone of many big data and streaming analytics applications.”

Read the full press release here

Friday, July 22, 2016

Effective Deployment of AI, Machine Learning and Predictive Models from R

Operational deployment in your business process is where AI, machine learning and predictive algorithms actually start generating measurable results and ROI for your organization. Therefore, the faster you are able to deploy and use these “intelligent” models in your IT environment, the sooner your business will reap the benefits of smarter decisions.

The Challenge

In the past, the operational deployment of AI, machine learning and predictive algorithms was a tedious, labor- and time-intensive task. Predictive and machine learning models, once built by the data science team, needed to be manually re-coded for enterprise deployment in operational IT systems. Only then could the models be used to effectively score new data in real-time streaming or big data batch applications.
As you can imagine, this process was prone to errors, could easily take six months or more, and wasted valuable resources. Not only did it limit how fast models could be deployed, it also made it difficult to leverage more complex machine learning algorithms that could deliver more precise results.
Given such challenges, how can we achieve a more efficient model development life cycle, for example with R, which is one of the most popular open source data mining tools?

A Standards-based Solution

The answer is PMML, the Predictive Model Markup Language industry standard. PMML is an XML-based standard for the vendor-independent exchange of predictive analytics, data mining and machine learning models. Developed by the Data Mining Group, PMML has matured to the point where it now has extensive vendor support and has become the backbone of big data and streaming analytics. For today’s agile IT infrastructure, PMML delivers the necessary representational power for predictive models to be quickly and easily exchanged between systems.
One of the leading statistical modeling platforms today is R. R allows for quick data exploration and extraction of important features, and it offers a large variety of packages that give data scientists easy access to a range of modeling techniques. The ‘pmml’ package for R was created to allow data scientists to export their models, once constructed, to PMML format. The latest version of this package, v1.5, adds various functions that give the modeler more interactive access to the generated PMML; the PMML document can now be modified to a greater degree after it has been constructed.
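As a minimal sketch of the basic export workflow (the pmml() and saveXML() functions come from the CRAN ‘pmml’ and ‘XML’ packages; the model and file name here are just examples):

    # Train a simple model in R and export it to PMML.
    library(pmml)   # provides pmml() for converting fitted models
    library(XML)    # provides saveXML() for writing the XML document

    # Example model: linear regression on the built-in iris data set.
    fit <- lm(Sepal.Length ~ Sepal.Width + Petal.Length + Petal.Width, data = iris)

    # Convert the fitted model to a PMML document and write it to disk.
    # The resulting file can be deployed to any PMML-consuming scoring engine.
    fit_pmml <- pmml(fit)
    saveXML(fit_pmml, file = "iris_lm.pmml")

The v1.5 helper functions can then be used to adjust elements such as MiningField and DataDictionary attributes in the generated document before deployment.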
For the R experts among the readers, the following series of posts describes in more detail some of the new functions implemented and their uses:
  1. R PMML helper functions to modify the MiningField element attributes
  2. R PMML helper functions to modify the DataDictionary element attributes
  3. DataDictionary Helper Functions II
  4. PMML Post-processing: Output Helper Function
For a more basic introduction to R, we invite you to download a free infographic and white paper.

The next step, of course, would be to upload your own PMML models into an operational platform. If you are ready for that and want to see how easy it is to deploy and score your models, please check out the free trial of the ADAPA Decision Engine on the AWS Marketplace.

Monday, July 27, 2015

Zementis Announces Predictive Analytics Integrated with IBM z Systems for Insight-driven Business Processes

Zementis, Inc. and IBM Corporation (NYSE: IBM) today announced a joint strategic initiative and corresponding technology solution, “Zementis for IBM z Systems”, designed for companies seeking to optimize business decisions and formulate those decisions faster. The solution seeks to unlock the full potential of an organization’s data assets by integrating predictive analytics capabilities directly into transactional data flows that drive business processes. 
The joint offering combines Zementis’ solution for high-speed development, deployment and operation of predictive analytics models with IBM z Systems, a family of next-generation mainframe computing platforms that help organizations reinvent enterprise IT to become digital businesses. Zementis for IBM z Systems enhances IBM’s capabilities by integrating predictive analytics directly into core business processes, delivering timely predictive insights directly to the point of maximum business impact.

Monday, July 20, 2015

Zementis: Big Data Insights through “True” Analytics

Scoring for Everyone and Everything 

In the past, the three constituents in the “making sense of data for the future” pool (data scientists, IT professionals and business users) have suffered from a lack of collaboration, inefficiencies in communication, and ineffectiveness at uncovering valuable insights for their enterprises.

Zementis presents a solution that helps enterprises bridge the gap between these three groups, or better yet, align them to create better insights for their organizations. At the center of this effort is the portability of analytical models. Zementis has been a fervent supporter of PMML (Predictive Model Markup Language), the leading standard for analytical model portability, in its products.

This vendor profile provides an overview of Zementis and identifies key differentiators, product offerings, and a short-list guide for buyers.

Tuesday, June 23, 2015

Software AG and Zementis Announce Partnership and Integrated Solution for Predictive Analytics

Software AG, a leading global provider of software and IT services, today announced a business and technology partnership with Zementis, as part of Software AG’s strategy and market offerings for big data predictive analytics.
Software AG and Zementis share a common philosophy and vision for the potential of advanced predictive analytics, and have therefore decided to forge a formal partnership based on this mutual understanding. The relationship is both a technology integration and a collaborative business partnership.
As part of this effort, Software AG has embedded Zementis ADAPA, a standards-based deployment and scoring engine for predictive analytics, within the Software AG Apama Streaming Analytics platform. The two companies will also collaborate on technical solution development, business development and technology enablement for their customers worldwide, spanning multiple industry segments and use cases. The companies will focus initially on use cases related to:
  • Internet-of-Things
  • Banking and Insurance
  • Retail
  • Manufacturing
The announcement is part of Software AG’s effort to enrich and expand its Digital Business Platform – a service-oriented development environment with numerous high-level, cloud-capable application services.
The integrated solution is available now.
To read the Software AG press release, click here.
To read a related post on the Software AG corporate blog, click here.

Wednesday, March 11, 2015

Zementis and Cognizant Partner to Bring Advanced Analytics to Cognizant Clients

On March 10, Cognizant Technology Solutions and Zementis signed a formal partnership agreement, launching a collaborative relationship that will bring advanced analytics solutions that include Zementis technologies to Cognizant’s clients.
The collaborative relationship will focus on integrating Zementis’ predictive analytics technologies with the advanced analytics capabilities of Cognizant Analytics, developing business solutions based on advanced analytics for Cognizant clients, and jointly enabling those solutions to deliver client success. The partnership is both a technology alliance and a channel (go-to-market) arrangement, and has global scope. Zementis and Cognizant Analytics will collaborate with each other and with Cognizant’s market-facing business segments to jointly serve Cognizant clients. Initial efforts will focus on two industry verticals: Banking & Financial Services and Life Sciences & Healthcare. Over time, the industry focus will expand to encompass Cognizant’s other industry solution groups: Insurance, Communications, Media & Technology and Products & Resources.
The partnership will encompass Zementis’ entire product portfolio, including its two core solutions, ADAPA® and UPPI, as well as multiple platform-specific variants. Cognizant clients will be able to deploy these solutions on-premise or in the cloud, with access via an intuitive Web-based console, via one of multiple industry-leading analytics platforms or as a simplified Hadoop interface.
For more information, read the press release.

Thursday, February 19, 2015

ADAPA for Azure: Enthusiasm in the Market, Excitement at Microsoft

In October of last year, Zementis was proud to announce that our flagship product ADAPA® had been certified for Microsoft’s Azure cloud platform. Well, we’re still proud of that accomplishment, and we’re even prouder to see strong interest in the market and enthusiastic support from Microsoft!

Since launching ADAPA on Azure, Zementis has observed a particularly high level of interest from existing and prospective customers, as well as from other key market players active in big data analytics. Companies, government entities and other organizations that rely on predictive analytics for accurate insights into future outcomes clearly understand the value of the Azure cloud, and of using Zementis ADAPA to power their predictive analytics activities in the cloud.
Microsoft’s Azure leadership team is also weighing in on the value of ADAPA for Azure.
“Microsoft is excited to welcome Zementis into the Azure community,” said Kim Akers, General Manager at Microsoft. “Zementis ADAPA on Azure enables users to develop and deploy predictive analytics models quickly, and then utilize the resulting predictive data in a variety of ways, both within the Azure ecosystem and beyond.”

Based on initial customer and market response, Zementis believes that ADAPA for Microsoft Azure will generate strong customer demand throughout 2015, laying the groundwork for further significant growth in 2016 and beyond.
“Companies that employ predictive analytics to inform their business decisions can benefit from ADAPA to significantly accelerate data-driven predictive insights, and by using ADAPA on Microsoft Azure, these companies also benefit from the full utility of Microsoft’s cloud platform and partner ecosystem,” says Dr. Michael Zeller, Chief Executive Officer of Zementis. “Together, ADAPA and Azure deliver insight, scalability, stability and security.”
For more information, please visit:

Tuesday, December 30, 2014

Scoring data with ADAPA using Pentaho Data Integration

Predictive model integration for MySQL, Microsoft SQL Server, Oracle and PostgreSQL

The main use of predictive models is to generate predictions for new data. This data frequently resides in databases like MySQL, and the ADAPA scoring engine needs a way to easily access it. One way of accomplishing this is by using the Pentaho Data Integration (PDI) tool, and in this post we outline how to score data from relational databases using the ADAPA REST API and PDI.

PDI provides an easy-to-use, point-and-click interface to manage the whole workflow: retrieving the data, scoring it through ADAPA, and saving the results elsewhere. PDI can read from and write to different databases, including MySQL, Microsoft SQL Server, Oracle, PostgreSQL, and others. It can also act as a client to the ADAPA Scoring Engine by leveraging the ADAPA REST API, taking care of transforming the data into the necessary formats, in this case JSON and URL-encoded parameters.

Prior to starting, we assume that:
  • PDI is installed
  • Data to be scored is stored in either MySQL, Microsoft SQL Server, Oracle or PostgreSQL
  • A PMML model for the data is deployed and available through the ADAPA REST API.

The process is built and executed in PDI. The transformation should consist of the following steps (a rough equivalent of this flow is sketched in R after the list):
  • Retrieve data from the database
  • Transform to a JSON object
  • Convert the JSON object to a URL as a method to transmit it
  • Send URL to ADAPA through REST API
  • Capture ADAPA output
  • Write the scoring result back to a flat file
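To make these steps concrete, here is a rough R sketch of the equivalent flow performed outside PDI. The database credentials, query, endpoint URL and authentication details are placeholders that depend on your own environment and ADAPA deployment.

    # Rough equivalent of the PDI transformation, sketched in R.
    # Connection details, query, and the REST endpoint are placeholders.
    library(DBI)
    library(RMySQL)
    library(jsonlite)
    library(httr)

    # 1. Retrieve data from the database.
    con  <- dbConnect(RMySQL::MySQL(), dbname = "shop", host = "localhost",
                      user = "db_user", password = "db_password")
    rows <- dbGetQuery(con, "SELECT * FROM customers_to_score")
    dbDisconnect(con)

    # 2-5. For each record: build a JSON object, pass it as a URL parameter to the
    #      scoring engine's REST API, and capture the output (placeholder URL).
    base_url <- "https://adapa.example.com/api/apply/MyModel"
    scores <- vapply(seq_len(nrow(rows)), function(i) {
      record_json <- toJSON(as.list(rows[i, ]), auto_unbox = TRUE)
      resp <- GET(base_url,
                  query = list(record = as.character(record_json)),
                  authenticate("api_user", "api_password"))
      content(resp, as = "text", encoding = "UTF-8")
    }, character(1))

    # 6. Write the scoring results back to a flat file.
    writeLines(scores, "scores.txt")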

For detailed step-by-step instructions using a neural network model deployed in ADAPA, please review the following videos:

Monday, December 29, 2014

Using MySQL as a Client to the ADAPA Scoring Engine

Predictive analytics scoring with MySQL and ADAPA

In this blog post, we outline how to use a MySQL database as a client to the ADAPA Scoring Engine by leveraging the ADAPA REST API to execute a predictive analytics model based on the Predictive Model Markup Language (PMML) industry standard.

We assume that:
  • MySQL and cURL are installed
  • Necessary MySQL tables are already created
  • A PMML model for the data is deployed and available through the ADAPA REST API.

One option for making API calls from MySQL is the MySQL-UDF-HTTP package, which enables the creation of user-defined functions (UDFs) for HTTP REST operations in a database. This package is available on Google Code and is installed on top of MySQL. We can leverage the UDFs created with this package to make REST API calls to ADAPA from MySQL. Specifically, we use HTTP GET requests to the ADAPA engine to score one record at a time. An advantage of using these functions is that we can easily write the scores back to the database.
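For illustration, the single-record GET request issued by the HTTP UDF is conceptually similar to the following R call (shown with httr rather than SQL; the endpoint, parameter names and the 'predictedScore' output field are assumptions, not the exact ADAPA API):

    # Illustration only: a per-record scoring call similar to what the HTTP UDF issues.
    library(httr)
    library(jsonlite)

    score_one_record <- function(customer_id, amount) {
      url  <- "https://adapa.example.com/api/apply/MyModel"   # placeholder URL
      resp <- GET(url,
                  query = list(customer_id = customer_id, amount = amount),
                  authenticate("api_user", "api_password"))
      out  <- fromJSON(content(resp, as = "text", encoding = "UTF-8"))
      out$predictedScore   # hypothetical name of the returned score field
    }

    # In the MySQL setup, a trigger-invoked function performs this call through the
    # HTTP UDF and writes the returned score back to the row that fired the trigger.
    score_one_record(customer_id = 12345, amount = 250.0)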

In addition, the scoring process can be automated with database triggers. Triggers automatically execute database queries when specified events occur. In this case, we can write functions to score and update or insert records, and set triggers to execute these functions on update and insert events. The HTTP UDF is called by the scoring function to send a GET request to the ADAPA REST API.

Using nothing more than SQL and UDFs, this setup enables us to execute complex predictive analytics models directly from one of the most commonly used databases, score the records, and write the results back into a database table.

A step-by-step tutorial, including installing MySQL-UDF-HTTP and writing functions and triggers, is available in this video.
