Drag and Drop Your Way to a New Machine Learning Model
Introducing PerceptiLabs, a Visual Modeling Tool for Machine Learning
In the past four to five years, with the growth of machine learning (ML), we’ve seen the number of available ML frameworks explode. TensorFlow has become a prominent player, especially when paired with the Python language and libraries like NumPy.
Regardless of the tools used, developing an ML model follows a similar process: acquire data, build and train the model, and deploy it for inference. Traditional programmatic approaches usually meant that building and training a model was a long and tedious process. For example, after building a few models ourselves, we realized that we needed to write boilerplate code just to understand, debug, and test the model. That was not the best use of our time when searching for the ideal solution.
As ML and artificial intelligence (AI) become more widespread, we’ve also seen that ML development is no longer just for developers specializing in AI. A broader range of users, including programmers, data engineers and scientists, as well as project managers and decision makers in organizations of all sizes, are now tasked with planning, building, and using ML models. Those building the models are being pushed to deliver a solution faster, and to explain exactly how the model works.
We experienced these challenges in our own work, so we started thinking about ways to bridge high-level APIs, low-level code, and intuitive user interfaces for these different types of users. At one point we even considered creating a virtual reality experience! In the end, this led my business partner, Robert Lundberg, and me (Martin Isaksson) to found our startup, PerceptiLabs, which focuses on making ML development easier and faster, yet still powerful: warp speed for ML!
A Closer Look at PerceptiLabs
PerceptiLabs is also the name of our first tool: a visual modeling tool for machine learning. With the tool, you visually drag and drop objects onto a workspace to form an ML model, and then connect them so that the output of one node becomes the input to another. The result is an ML model that can be trained on input data and then exported as a TensorFlow model:
PerceptiLabs currently offers the following categories of objects that can be added to a model:
- Data: specifies the sources of data (e.g., NumPy files) for the raw data and labels.
- Processing: performs processing such as transformations on input data (e.g., reshaping an array).
- Deep learning: provides deep learning operations such as convolution and fully connected layers.
- Mathematics: provides common mathematical operations.
- Training: provides various training methods such as reinforcement learning.
- Classical Machine Learning: provides classical (non-deep-learning) methods such as K-Means clustering.
- Custom: allows you to create custom objects by writing Python code, as shown in the sketch below.
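For example, a simple custom processing object might look something like the following (a hypothetical sketch; the exact custom-object API in PerceptiLabs may differ):

```python
# Hypothetical sketch of a custom processing object -- PerceptiLabs'
# actual custom-object API may differ.
import numpy as np

def custom_normalize(inputs):
    """Scale raw pixel values from [0, 255] into [0, 1]."""
    return inputs.astype(np.float32) / 255.0

# Example: normalize a batch of 28x28 grayscale images.
batch = np.random.randint(0, 256, size=(32, 28, 28), dtype=np.uint8)
normalized = custom_normalize(batch)
print(normalized.min(), normalized.max())  # values now within [0.0, 1.0]
```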
One of the key aspects of PerceptiLabs is visualization and transparency at every stage. For example, you can double-click on an object to see how it has processed or transformed its input data. In the example below, the digit 1 is shown, reflecting the reshaped raw input data:
Constructing the model’s graph creates TensorFlow code behind the scenes, which you can preview and customize at any time:
Note that we’re also aiming to introduce Keras and PyTorch templates in the near future.
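To give a flavor of the kind of code involved, here’s a rough sketch of what a convolution object could map to in TensorFlow (illustrative only; not necessarily the exact code PerceptiLabs generates):

```python
import tensorflow as tf

# Illustrative only -- not the literal code PerceptiLabs emits.
# Roughly what a "Convolution" object could map to in TensorFlow.
def convolution_object(inputs):
    """A convolution node: takes the previous node's output as its input."""
    conv = tf.keras.layers.Conv2D(
        filters=8,        # number of feature maps (hypothetical setting)
        kernel_size=3,    # 3x3 kernel
        padding="same",
        activation="relu",
    )
    return conv(inputs)

# Example: feed a batch of one 28x28 grayscale image through the node.
image = tf.zeros((1, 28, 28, 1))
features = convolution_object(image)
print(features.shape)  # (1, 28, 28, 8)
```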
After the model’s graph has been constructed, you can “run” the model to train it. This displays a series of statistics that you can view for the model as a whole and for the individual objects that make it up:
You can also run a test to see how the trained model performs on the test data, and then iterate and refine your model as required. Having such insight and transparency is a key requirement for debugging, tuning, and optimizing a model, and was an important goal from the beginning.
Once you’re satisfied with your model’s performance, you can then export the fully-trained TensorFlow model to perform inference in your projects.
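As a minimal sketch of that last step, assuming the model is exported in TensorFlow’s SavedModel format (the exact export format may differ), inference could look like this:

```python
# Minimal sketch: running inference on an exported model, assuming it is
# saved in TensorFlow's SavedModel format (TF 2.x API).
import numpy as np
import tensorflow as tf

model = tf.saved_model.load("exported_model")   # hypothetical export path
infer = model.signatures["serving_default"]     # default serving signature

# A single 28x28 grayscale image, shaped as a batch of one.
sample = np.zeros((1, 28, 28, 1), dtype=np.float32)
prediction = infer(tf.constant(sample))
print(prediction)
```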
Learning What This Means for Your ML Development
At its heart, PerceptiLabs is purely data-flow driven, allowing everyone from data scientists to software engineers to easily understand the flow between nodes. At the same time, it remains powerful because developers can customize the underlying Python code.
Through this functionality, PerceptiLabs provides three main benefits:
- Fast modeling: the drag-and-drop interface, along with per-object previews, provides instant feedback. IO shape fitting is at the core of channeling outputs between objects: PerceptiLabs automatically calculates and keeps track of the dimensions of each output and ensures that it fits the input of the next layer, while still letting you configure this yourself (see the sketch after this list). Every component/layer is re-programmable and has both a code view/editor and a high-level user interface. More generally, feedback is provided both visually and via the code editor, and the model’s infrastructure is taken care of for you, allowing for easy debugging and scalability to larger and more complex ML models.
- Increased transparency: PerceptiLabs provides visualizations, high-level abstractions, and low-level code. In addition to output metrics, there is full transparency during training, with real-time analytics for every operation and variable, which collectively open up the “black box” of ML so you can better understand how the model derives its results. From the output, you can easily inspect and verify the ML model and decide whether it should be exported or refined further. There is also a full view of hardware performance (RAM, CPU, and GPU).
- Flexibility: PerceptiLabs allows you to customize the environment and the statistics dashboard, and to write custom code. Anything that can be written in Python can be written and executed in PerceptiLabs. You can also run PerceptiLabs calculations on your local development machine, on one of your on-premise servers, or in the cloud, which provides options for organizations of different sizes and infrastructure capabilities.
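To illustrate the shape-fitting idea mentioned under “Fast modeling” above, here is a simplified sketch (not PerceptiLabs’ actual implementation) of automatically fitting one node’s output to the next node’s expected input:

```python
# Simplified sketch of automatic IO shape fitting between two nodes --
# not PerceptiLabs' actual implementation.
import numpy as np

def fit_to_input(output, expected_shape):
    """Reshape a node's output so it fits the next node's expected input."""
    out_size = int(np.prod(output.shape))
    expected_size = int(np.prod(expected_shape))
    if out_size != expected_size:
        raise ValueError(
            f"Cannot fit output of shape {output.shape} "
            f"to expected input shape {expected_shape}"
        )
    return output.reshape(expected_shape)

# Example: a 28x28 image flattened to feed a fully connected layer
# expecting 784 inputs.
image = np.zeros((28, 28))
flattened = fit_to_input(image, (784,))
print(flattened.shape)  # (784,)
```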
Looking Back on a New Way Forward
So how did PerceptiLabs come about? Robert and I met on the first day of our studies at the Royal Institute of Technology in 2012, and we have worked together on projects ever since.
We started in Robert’s garage-based laboratory, working on many projects involving AI ranging from video games to robotics. It was during this time that we realized we wanted to create our own projects and build cool stuff.
We kicked off PerceptiLabs by doing a job for Spotify: teaching 25 data scientists to use ML on their big-data pipelines, helping them build two models, and putting those models into production in their internal systems. After that, we decided to work full time on developing our ML platform, which we’ve done since the summer of 2017.
The first beta version of the platform, called QuantumNet, was released in March 2018. Robert and I built this version end-to-end by ourselves, albeit with a much more primitive user interface to start with. When released, the platform was downloaded approximately 150 times, and within two weeks we were contacted by Microsoft, Nvidia, Amazon, and Google.
Since November 2018, the application has been called “PerceptiLabs”, and alpha and beta testers from these large enterprises have been reviewing our platform. Their feedback has given us deeper insight into the user experience and a better understanding of the pain points of ML modeling.
Moving Forward
To help get you started, PerceptiLabs includes sample data for the classic ML problem of identifying numbers from bitmap images of the digits 0 through 9. We’ve included an interactive tutorial within the application itself that guides you through constructing a model to solve this problem. We’ve also published a Getting Started Guide and a video tutorial.
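For reference, a hand-written TensorFlow equivalent of the kind of digit classifier the tutorial walks you through might look like this (illustrative only; not the tutorial’s exact model):

```python
# A hand-written TensorFlow equivalent of a simple digit classifier --
# illustrative only, not the exact model the tutorial builds.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),    # reshape 28x28 bitmaps
    tf.keras.layers.Dense(128, activation="relu"),    # fully connected layer
    tf.keras.layers.Dense(10, activation="softmax"),  # one class per digit 0-9
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```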
We are still in the early stages of developing the PerceptiLabs application, so stay tuned as there is a lot more to come. In the meantime, we hope you enjoy using PerceptiLabs as much as we’re enjoying the journey of developing it for you!
Originally published at https://blog.perceptilabs.com.