Lexalytics® Announces Text Analytics Suite Availability for Any Computing Environment

Lexalytics®, the leader in “words-first” machine learning and artificial intelligence, announced today that its entire text analytics and natural language processing (NLP) product suite is available for deployment in any computing environment—on-premises; private, public or hybrid cloud; or individual workstation. Previously, the only on-premises option from Lexalytics was its core Salience® text analytics libraries, which are integrated into existing customer or BI applications. Now, Semantria®, the company’s text analytics RESTful API, as well as Lexalytics Intelligence Platform™, a complete application for gathering, processing, modeling, analyzing and visualizing relevant information extracted from unstructured text, can also be deployed on-premises, in addition to public cloud or hybrid public-private cloud configurations.

While the majority of text analytics providers offer only a public cloud option, enterprise customers often require on-site, behind-the-firewall deployments for a variety of reasons, including greater customizability, higher security, lower latency and the need to process very high volumes of text data. Lexalytics has seen its on-premises business grow at twice the rate of its public cloud offering. Industry analyst firm IDC recently reported that spending on private cloud IT infrastructure would grow approximately 23.3 percent year over year in 2018 and that spending on non-cloud IT infrastructure would represent 52.6 percent of the market.[1]

“We’re seeing a lot of demand from analytics teams within enterprises for a full text analytics stack, not only in the public cloud, but also in on-prem and hybrid environments,” said Jeff Catlin, CEO of Lexalytics. “We’re pleased to be one of the only companies to offer a solution for processing unstructured data in any computing environment.”

Micromodels

Today, Lexalytics also announced it is pioneering a new machine learning approach to text analytics that it is calling “micromodels.” In any text analytics application, there is generally a small subset of phrases, concepts and entities that are difficult to correctly score or extract with monolithic “macromodels.” These ambiguous terms can cause a drop in a system’s accuracy. For example, the word “tight” can mean many things in standard and vernacular English, from “strongly fixed,” such as, “The lid is tight,” to “cool,” as in, “That video was tight!” and can have positive, negative or neutral sentiment, depending on usage.

Text analytics companies have traditionally approached this problem by training one monolithic model with large amounts of data. With micromodels, Lexalytics can greatly improve accuracy by identifying the critical subset of terms unique to a particular customer or industry and creating a micromodel for each term, dramatically reducing the amount of data and training hours required. Lexalytics predicts that micromodels will approach 100 percent accuracy for certain words and phrases, surpassing human comprehension and other systems currently in use.
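To make the idea concrete, the following is a minimal, hypothetical Python sketch of how a per-term micromodel might override a general sentiment model when an ambiguous term such as “tight” appears. The toy data, model choice (scikit-learn logistic regression) and routing function are illustrative assumptions, not Lexalytics’ implementation.

```python
# Minimal sketch of the "micromodel" routing idea: a small classifier
# trained only on sentences containing one ambiguous term, consulted
# before a general (monolithic) sentiment model. Hypothetical example.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy training data for the ambiguous term "tight".
tight_examples = [
    ("that video was tight", "positive"),
    ("the new mix sounds tight", "positive"),
    ("the lid is tight", "neutral"),
    ("keep a tight grip on the rail", "neutral"),
    ("money is tight this month", "negative"),
    ("the deadline is too tight", "negative"),
]

def train_micromodel(examples):
    """Train a small per-term sentiment classifier on term-specific data."""
    texts, labels = zip(*examples)
    model = make_pipeline(
        CountVectorizer(ngram_range=(1, 2)),
        LogisticRegression(max_iter=1000),
    )
    model.fit(list(texts), list(labels))
    return model

# One micromodel per ambiguous term; a general model handles everything else.
micromodels = {"tight": train_micromodel(tight_examples)}

def score_sentiment(text, general_model=None):
    """Route to a term-specific micromodel when an ambiguous term appears."""
    for term, model in micromodels.items():
        if term in text.lower():
            return model.predict([text])[0]
    # Fall back to the monolithic model (omitted in this sketch).
    return general_model.predict([text])[0] if general_model else "neutral"

print(score_sentiment("That solo was so tight!"))       # expected: positive
print(score_sentiment("My schedule is really tight."))  # expected: negative
```

In this sketch, the micromodel only ever sees sentences containing its term, which is why it can specialize with far less training data than a monolithic model would need.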

“From 30 years of building predictive models, I find that the more targeted the model, the better it performs,” said Randy Hlavac, CEO, Marketing Synergy and Lecturer, Northwestern University Medill Integrated Marketing Communications. “That’s why micromodels make sense. Today, you need focused machine learning like Lexalytics provides to best learn about your high value markets. Their expertise and approach is why I use Lexalytics systems for all of my online classes at Northwestern.”

Pricing and Availability

Salience, Semantria and Lexalytics Intelligence Platform are available in any computing environment today. For more information and pricing, please contact sales@lexalytics.com.

[1] IDC Press Release, “Cloud IT Infrastructure Revenues Surpassed Traditional IT Infrastructure Revenues for the First Time in the Third Quarter of 2018, According to IDC,” January 10, 2019