Magic Machines AI Labs

Leadership Team

  • Paul Barba

    Chief Scientist

    Paul Barba's earliest memory is staring up at a tree canopy, wondering why he was viewing the world through his own eyes, and not the eyes of the people around him. Since then, he's been constantly fascinated by the mysteries of the human mind. As an adult, he continued to pursue this interest at UMass Amherst, where he studied Computer Science. He was drawn into Machine Learning and AI, eager to understand how the core abilities of life could be abstracted into code and math.

    After a decade of research and engineering on natural language processing, I'm still astounded by the intricacies and richness of human thought and experience. AI has come so far in the last 50 years, and still falls so short of the accomplishments we humans take for granted in our everyday lives. In the coming years I hope to examine that gap, push computers a step further ahead, and understand just a little more about the human mind and experience. I feel so privileged to exist at such a pioneering moment of history, where I can watch a few great mysteries of science finally being unveiled and play some small role in the progress of human understanding.
  • Brian Pinette

    Lead AI Researcher

    Brian Pinette was drawn to AI in his first few weeks as a freshman at MIT, where he did a bachelor's thesis in computer vision under David Marr. He got caught up in the neural network resurgence during the doctoral program at the University of Massachusetts Amherst, where he worked with Andrew Barto and Richard Sutton, and received his Ph.D. in vision-guided robot navigation, working with Edward Riseman. Looking back, he is amazed that he has always been able to find jobs with some kind of AI component (natural language, 2D and 3D computer vision, intelligent agents for games, and machine learning) without ever having to leave the bucolic Pioneer Valley, where he has coached a local high-school robotics team for well over a decade. He also helped develop large-scale search engines back when they were new and exciting, before they became a commodity. His current goal is to make machine learning yet another commodity.

  • Al Hough

    Lead AI Researcher

    Dr. Alfred A. Hough is often compared to a giant floating brain in space... or at least he would be, if that brain were smart enough. Al is actually a real doctor, having earned his Ph.D. in Computer Science from the University of Massachusetts in 1991. Since then, he's been enjoying turning sci-fi into reality, developing everything from search engines to 3D imaging radar to large-scale AI symbolic reasoning systems.

    As Lead AI Researcher at Lexalytics, he specializes in applications of machine learning to text and is currently building a system for large-scale text analytics that is powered by machine learning but intended to be usable by non-experts.

Through its partnership with the University of Massachusetts Amherst's Center for Data Science, Lexalytics will work with faculty and staff on the underlying challenges of making the AI-building process easier.

With Northwestern University’s Medill School of Journalism, Media and Integrated Marketing Communications, Lexalytics will have access to the future marketers who will increasingly use AI technology as a key part of their jobs.

Swarm Intelligence/Emergent Behavior

Swarm intelligence is behavior that emerges from the interaction of large numbers of smaller, specialized parts: the whole becomes greater than the sum of its parts, or put differently, the combined behavior becomes a thing of its own. Biological examples range from simple bird flocking (based on basic separation and angle "calculations"), to the sophisticated behavior of a beehive (with different types of bees, dance-based communication, and pheromonal communication), to a human business. Even the most micromanaged business of any size leaves a certain amount of decision-making power to the people who aren't in charge; their decisions interact with and affect the decisions of other people in the organization, and the combined behavior manifests as the behavior of the business. A toy illustration of how simple local rules can produce group behavior follows below.
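
As a toy illustration only (not part of the Lexalytics platform), the sketch below shows how two simple local rules, separation and alignment, can produce coordinated flock-like motion without any central controller. All names and parameters are made up for the example.

```python
# Illustrative sketch: emergent flocking from two local rules
# (separation and alignment). All names and parameters are hypothetical.
import random

class Bird:
    def __init__(self):
        self.pos = [random.uniform(0, 100), random.uniform(0, 100)]
        self.vel = [random.uniform(-1, 1), random.uniform(-1, 1)]

def step(flock, sep_radius=5.0, sep_weight=0.05, align_weight=0.05):
    for bird in flock:
        for other in flock:
            if other is bird:
                continue
            dx = bird.pos[0] - other.pos[0]
            dy = bird.pos[1] - other.pos[1]
            dist = (dx * dx + dy * dy) ** 0.5
            # Separation: steer away from neighbours that are too close.
            if 0 < dist < sep_radius:
                bird.vel[0] += sep_weight * dx / dist
                bird.vel[1] += sep_weight * dy / dist
            # Alignment: nudge velocity toward neighbours' headings.
            bird.vel[0] += align_weight * (other.vel[0] - bird.vel[0]) / len(flock)
            bird.vel[1] += align_weight * (other.vel[1] - bird.vel[1]) / len(flock)
        bird.pos[0] += bird.vel[0]
        bird.pos[1] += bird.vel[1]

flock = [Bird() for _ in range(50)]
for _ in range(100):
    step(flock)  # coordinated motion emerges without any central controller
```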

We are developing artificial intelligence "cards": each card has specific functionality (for example, training a deep learning model, or providing an interface for humans to give feedback), and each card can also communicate state to, and get feedback from, other cards. These cards are arranged as necessary to provide the desired AI functionality. You could certainly get use out of an individual card; think of it as a module you could use in a program or a script. But the power is in how the cards communicate with the other cards they conjure up. This methodology both eliminates much of the work that goes into building AIs today (arranging the pieces with one-time or throwaway code) and allows for self-structured systems. A rough sketch of the idea follows.
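
A minimal sketch of how such cards might look in code, assuming a simple publish/subscribe style of communication. The class names, methods, and accuracy threshold are hypothetical and do not describe the actual product interface.

```python
# Hypothetical sketch of the "card" idea: each card does one job and can
# exchange state with other cards. Names here are illustrative only.
class Card:
    def __init__(self, name):
        self.name = name
        self.listeners = []          # cards that want our state updates

    def connect(self, other):
        self.listeners.append(other)

    def publish(self, state):
        for card in self.listeners:
            card.receive(self.name, state)

    def receive(self, sender, state):
        pass                         # each card decides how to react

class TrainerCard(Card):
    """Trains a model and broadcasts its latest accuracy."""
    def run(self, data):
        accuracy = 0.72              # placeholder for a real training run
        self.publish({"accuracy": accuracy, "examples": len(data)})

class FeedbackCard(Card):
    """Asks for human feedback when the trainer's accuracy is low."""
    def receive(self, sender, state):
        if state["accuracy"] < 0.8:
            print(f"{sender}: accuracy {state['accuracy']:.2f}, requesting human review")

# Cards are arranged as needed rather than wired together with throwaway glue code.
trainer = TrainerCard("trainer")
feedback = FeedbackCard("feedback")
trainer.connect(feedback)
trainer.run(data=range(1000))
```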

Adaptive AI

Different machine learning algorithms require different tuning (via hyperparameter optimization) and widely varying amounts of data. We believe in choosing the algorithm that is best right now, and then swapping it out as more knowledge is acquired. For example, if you don't have a lot of data, deep learning isn't going to be the right choice for your first pass, but something like a Naïve Bayes model probably will get you to the first checkpoint. The overall system should take this into account and swap algorithms once you pass the amount of data needed to train an algorithm that will provide better results, without a human having to explicitly tell it to do so. A sketch of that idea follows.
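
A hedged sketch of this behavior, assuming scikit-learn as the toolkit and an arbitrary 50,000-example cutoff; the specific models and threshold are illustrative only, not a prescription.

```python
# Sketch of adaptive algorithm choice: start with Naive Bayes when data is
# scarce, switch to a heavier model once enough examples arrive.
# The threshold and model choices are illustrative assumptions.
from sklearn.naive_bayes import MultinomialNB
from sklearn.neural_network import MLPClassifier

def pick_model(n_examples, threshold=50_000):
    if n_examples < threshold:
        return MultinomialNB()                       # cheap, works with little data
    return MLPClassifier(hidden_layer_sizes=(256,))  # data-hungry but stronger

def retrain_if_needed(current_model, X, y, threshold=50_000):
    """Swap algorithms automatically as the dataset crosses the threshold."""
    candidate = pick_model(len(X), threshold)
    if type(candidate) is not type(current_model):
        current_model = candidate                    # upgrade without human input
    current_model.fit(X, y)
    return current_model
```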

Transfer Learning

Certain problems share the same basic characteristics as other problems. So, say we've already trained a model for problem A, and problem B shares much of the same underlying structure. The advantage of transfer learning is that solution B will require much less data to train than solution A did. A sketch of the idea follows.
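
A minimal sketch of the idea using PyTorch, assuming problem A's model is a small feed-forward network. The layer sizes and the decision to freeze the shared feature extractor are illustrative assumptions, not a description of Lexalytics' implementation.

```python
# Illustrative transfer-learning sketch: reuse the features learned for
# problem A so that problem B needs far less labelled data.
import torch
import torch.nn as nn

# Suppose model_a was trained on a large dataset for problem A.
model_a = nn.Sequential(
    nn.Linear(300, 128), nn.ReLU(),   # shared feature extractor
    nn.Linear(128, 10),               # problem-A classifier head
)

# For problem B, keep the feature extractor, freeze it, and train only a new head.
feature_extractor = model_a[:2]
for param in feature_extractor.parameters():
    param.requires_grad = False

model_b = nn.Sequential(
    feature_extractor,
    nn.Linear(128, 3),                # small head: trainable with little data
)
optimizer = torch.optim.Adam(model_b[1].parameters(), lr=1e-3)
```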

Meta-learning

Meta-learning is, put simply, learning how to learn. By abstracting the underlying machine learning "plumbing" into these "cards", we can use the same machine learning algorithms to understand the behaviors and applicability of each algorithm, and stack the deck so that the system learns how to train itself with a minimum of human interaction. The promise of AI is not in systems that require massive people power to stand up, but in systems that start with a little bit of knowledge and self-direct: they turn their learning algorithms onto themselves in a feedback loop, creating a robust system that responds quickly to changes in the dataflow, consumes the changed data, adapts, and improves itself based on these new resources. A deliberately simplified sketch of this bookkeeping appears below.
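
At its simplest, the bookkeeping might look like the sketch below: record how each algorithm performed on past tasks, then use that history to pick a starting algorithm for a new task with minimal human input. Every detail here, from the similarity heuristic to the example scores, is an illustrative assumption.

```python
# Very simplified meta-learning sketch: learn from past runs which algorithm
# to try first on a new task. All details are illustrative.
from collections import defaultdict

class MetaLearner:
    def __init__(self):
        self.history = defaultdict(list)   # algorithm name -> past (size, score) runs

    def record(self, algorithm, task_size, score):
        self.history[algorithm].append((task_size, score))

    def suggest(self, task_size):
        """Pick the algorithm whose past results on similar-sized tasks were best."""
        best, best_score = None, float("-inf")
        for algorithm, runs in self.history.items():
            similar = [s for size, s in runs if 0.5 * task_size <= size <= 2 * task_size]
            if similar and max(similar) > best_score:
                best, best_score = algorithm, max(similar)
        return best                        # None means: no relevant experience yet

meta = MetaLearner()
meta.record("naive_bayes", task_size=5_000, score=0.78)
meta.record("deep_net", task_size=500_000, score=0.91)
print(meta.suggest(task_size=8_000))       # -> "naive_bayes"
```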

Try the demo with a URL or plain text for a bit of what we do

Or call us at 1-800-377-8036
