"Any sufficiently advanced technology is indistinguishable from magic." – Arthur C. Clarke
The Lexalytics Magic Machines™ initiative aims to deliver flexible, self-driven machine learning systems that work with humans to produce specific AI solutions for solving business problems at any scale. Our world-class data scientists have partnered with University of Massachusetts and Northwestern University faculty and students to revolutionize the way organizations approach using AI in business and academic settings. We’ll bridge the gap between data scientists and enterprises by providing intuitive, easy-to-deploy AI tools that anyone can conjure with a minimum of effort.
The promise of artificial intelligence is not in systems that require massive human support, but in those systems that stand by themselves. An AI should be able to start with a small bit of knowledge and then turn its learning algorithms onto itself to generate a feedback loop that results in a robust, agile system. This system should respond quickly to changes in the flow of information by adapting to new data and improving itself based on newly-available resources. Most of all, AI should be able to do this while delivering quantifiable business outcomes with a minimum of human direction.
The breakthrough technologies researched by Magic Machines AI Labs have already accelerated our product development cycle. Stay tuned over the coming months as we announce products that empower data scientists and enable business intelligence professionals to quickly and easily shape powerful AI systems.
Paul Barba pursued his interest in the mysteries of the human mind at the University of Massachusetts Amherst, where he studied Computer Science. Eager to understand how the core abilities of life could be abstracted into code and math, he naturally gravitated towards Machine Learning and AI. After a decade of research and engineering in natural language processing, Paul is still astounded by the intricacies and richness of human thought and experience. In the coming years, he hopes to push computers further towards parity with the human mind and drive a new wave of technological advancement.
Brian Pinette was drawn to AI in his first weeks as a freshman at MIT, where he completed a bachelor’s thesis in computer vision, but it was during his doctoral program in vision-guided robot navigation at the University of Massachusetts Amherst that he got caught up in the resurgence of neural networks. Since then, every job he’s held has had an AI component (natural language, 2D and 3D computer vision, intelligent agents for games, and machine learning, to name a few). At Lexalytics, Brian works to make machine learning systems yet another commodity, like search engines and web services.
Alfred Hough earned his Ph.D. in Computer Science from the University of Massachusetts in 1991. Since then, he's been busy developing everything from search engines to 3D imaging radar to large-scale AI symbolic reasoning systems. At Lexalytics, Al specializes in applying machine learning techniques to text. His current work involves building a scalable machine learning system to drive large-scale text analytics for non-technical users.
Through our partnership with the University of Massachusetts Amherst’s Center for Data Science, Lexalytics works with faculty and staff on the underlying challenges necessary to make the AI building process easier.
By partnering with Northwestern University’s Medill School of Journalism, Media and Integrated Marketing Communications, Lexalytics gains access to the future marketers who will increasingly use AI technology as a key part of their jobs.
We’re developing AI “modules” that act together to form an intelligence that’s greater than the sum of its parts. Each module has a specific functionality (such as training a deep learning model, or providing an interface), and can be swapped out or rearranged relative to other modules. Individual modules are fully functional and provide value on their own, but their real power is in how they work together to form an AI swarm intelligence. Like individual birds in a flock, each module adjusts its behavior in real-time based on feedback from the other components. Individually and collectively, the modules work to achieve whatever larger AI functionality you desire. This modular method will revolutionize the way AI implementations are conceived and constructed at any scale.
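As a toy sketch of the modular idea (all class names here are our own illustration, not Lexalytics APIs): if every module implements one common interface, then modules can be swapped, rearranged, and even nested, since a pipeline of modules is itself a module.

```python
# Illustrative sketch: swappable AI "modules" behind one shared interface.
# Names (Module, Pipeline, Lowercase, TokenCount) are hypothetical.

class Module:
    """Common interface every module implements, so any two can be swapped."""
    def process(self, data):
        raise NotImplementedError

class Lowercase(Module):
    """A module with one specific job: normalize text casing."""
    def process(self, data):
        return [text.lower() for text in data]

class TokenCount(Module):
    """Another single-purpose module: count whitespace tokens."""
    def process(self, data):
        return [len(text.split()) for text in data]

class Pipeline(Module):
    """A pipeline is itself a module, so pipelines can nest and rearrange."""
    def __init__(self, modules):
        self.modules = list(modules)

    def process(self, data):
        for module in self.modules:
            data = module.process(data)
        return data

    def swap(self, index, replacement):
        self.modules[index] = replacement   # replace a component at runtime

pipeline = Pipeline([Lowercase(), TokenCount()])
print(pipeline.process(["Magic Machines", "AI at any scale"]))  # [2, 4]
```

Because every component honors the same `process` contract, swapping a deep learning model for a rule-based one (or reordering the stages) never breaks the surrounding system.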
Both the type and quantity of information being processed are important factors in choosing a machine learning model, but the flow of data isn’t always consistent. A machine learning algorithm that suits today’s dataset may be sub-optimal tomorrow. Instead of stubbornly sticking with one algorithm, we believe in flexibility. Just like humans, computers should change their approach when presented with new kinds of information under new conditions. We envision a fully adaptive AI that will start from scratch using one model and then, without a human telling it to do so, swap algorithms when it determines that a different model will provide better results.
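One minimal way to sketch that idea (the class and the toy fixed-score models below are our own illustrative assumptions, not a Lexalytics implementation): periodically re-score every candidate algorithm on recent data, and swap to a challenger only when it clearly beats the incumbent.

```python
import statistics

# Hypothetical sketch: re-score candidate models on recent batches and
# switch, without human direction, when another model clearly outperforms.

class AdaptiveLearner:
    def __init__(self, models, margin=0.02):
        self.models = models            # name -> scoring function
        self.margin = margin            # required improvement before swapping
        self.current = next(iter(models))

    def evaluate(self, recent_batches):
        """Score each candidate on recent data (here: mean accuracy stub)."""
        return {name: statistics.mean(score(batch) for batch in recent_batches)
                for name, score in self.models.items()}

    def maybe_swap(self, recent_batches):
        scores = self.evaluate(recent_batches)
        best = max(scores, key=scores.get)
        if scores[best] > scores[self.current] + self.margin:
            self.current = best         # swap algorithms automatically
        return self.current

# Toy stand-ins: real scorers would train/evaluate on the batch contents.
models = {"naive_bayes": lambda batch: 0.80, "deep_net": lambda batch: 0.91}
learner = AdaptiveLearner(models)
print(learner.current)                  # naive_bayes (the starting model)
print(learner.maybe_swap([None, None])) # deep_net (swapped on evidence)
```

The `margin` guard is the important design choice: without it, the system would thrash between models whose scores differ only by noise.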
Adaptive swarm intelligences often need periodic guidance during their early stages. We’re developing higher-layer AI models dedicated to watching a fledgling swarm’s behavior and helping it coalesce into a focused, effective system. Think of it like an AI coach, nudging its charges back on track when they stray off course. Taking this approach means we get the best of both worlds: self-assembling swarms governed by higher powers that give specific instructions to keep each swarm focused on its purpose.
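In its simplest form, such a coach is a supervisor that watches a metric stream and issues a corrective nudge proportional to the drift. The sketch below is our own toy illustration of that idea, not the actual higher-layer models described above.

```python
# Hypothetical "AI coach" sketch: a supervisor watches a fledgling system's
# key metric and nudges it back toward target when it strays off course.

class Coach:
    def __init__(self, target, tolerance):
        self.target = target          # where the swarm's metric should sit
        self.tolerance = tolerance    # how far it may drift before we act

    def advise(self, metric):
        """Return a corrective nudge, or None if the swarm is on track."""
        drift = metric - self.target
        if abs(drift) <= self.tolerance:
            return None               # on course: no intervention needed
        # Push back toward the target, proportional to how far it strayed.
        return round(-drift, 3)

coach = Coach(target=0.9, tolerance=0.05)
print(coach.advise(0.92))   # None (within tolerance)
print(coach.advise(0.70))   # 0.2 (nudge upward)
```

As the swarm matures and its metric stays within tolerance, the coach naturally falls silent, which is exactly the "periodic guidance during early stages" behavior described above.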
Why reinvent the wheel? Given two software problems that share some characteristics, certain elements of a solution to Problem A can carry over to Problem B. We’re developing AI algorithms that identify those similarities and utilize prior solutions in new ways. By re-using or recycling similar solutions for similar problems, we’ll dramatically reduce the time it takes to develop new AI integrations.
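A minimal sketch of that matching step (the Jaccard similarity measure, the 0.5 threshold, and all names are our own illustrative assumptions): describe each problem by its characteristics, find the most similar previously-solved problem, and reuse its solution when the overlap is strong enough.

```python
# Hypothetical sketch of solution reuse: measure how much a new problem
# resembles past ones, and carry the closest prior solution over to it.

def similarity(features_a, features_b):
    """Jaccard overlap between two problems' characteristic sets."""
    a, b = set(features_a), set(features_b)
    return len(a & b) / len(a | b) if a | b else 0.0

def reuse_solution(new_problem, solved, threshold=0.5):
    """Pick the prior solution whose problem most resembles the new one."""
    best = max(solved, key=lambda p: similarity(new_problem, p["features"]))
    if similarity(new_problem, best["features"]) > threshold:
        return best["solution"]       # elements of Problem A carry to B
    return None                       # too dissimilar: start fresh

solved = [
    {"features": {"text", "english", "sentiment"}, "solution": "model_A"},
    {"features": {"images", "faces"}, "solution": "model_B"},
]
print(reuse_solution({"text", "english", "sentiment", "topics"}, solved))
# model_A — the new topic-analysis problem overlaps heavily with it
```

In practice, "reusing a solution" might mean warm-starting model weights, copying a feature pipeline, or transferring hyperparameters; the matching principle is the same.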
We’re abstracting the machine learning techniques that underlie our AI “modules” to serve as a starting point for a meta-learning feedback loop. Using these abstracted algorithms, our software systems will analyze their own behaviors, determine the applicability of individual elements, and modify their actions to further improve the learning process. Optimizing how our machine learning algorithms self-direct will improve performance and usher in a new era of AI efficiency.
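The shape of such a meta-learning loop can be shown with a toy example (the quadratic objective, step sizes, and function names below are our own illustrative assumptions): an outer loop observes how well the inner learner learns under different settings, then adopts the setting that learned best.

```python
# Hypothetical meta-learning sketch: an outer loop watches the inner
# learner's own progress and tunes its step size — learning how to learn.

def inner_train(step, start=10.0, iters=20):
    """Inner learner: gradient descent on the toy objective f(x) = x**2."""
    x = start
    for _ in range(iters):
        x -= step * 2 * x             # gradient of x**2 is 2x
    return abs(x)                     # final distance from the optimum

def meta_learn(candidate_steps):
    """Outer loop: trial each setting, keep the one that learned best."""
    results = {step: inner_train(step) for step in candidate_steps}
    return min(results, key=results.get)

best = meta_learn([0.01, 0.1, 0.95])
print(best)   # 0.1 — too small crawls, too large oscillates
```

Here the feedback loop has only one knob and one trial each, but the same pattern — a system evaluating and modifying its own learning process — scales to the module-level self-direction described above.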