Qwak now supports feedback tracking for machine learning (ML) models.
ML models are trained on historical data, and understanding the gap between the data a model was trained on and the data it consumes in production is a complicated task. This new capability allows data scientists to monitor their actual model performance in real time and connect a deployed, invoked model to its real-world metrics.
Using feedback tracking for ML models has several advantages.
Once a model is deployed to Qwak using one of the deployment options, with Qwak Analytics enabled, all of the model's inputs and outputs are automatically tracked. The feedback API is then used to send tagged labels and connect them to the same model ID. Qwak feedback joins the two data sources and calculates the model's metrics.
To collect the model's inference data, the Analytics feature must be enabled. But having the data saved in the Qwak lake isn't enough; in order to close the feedback loop, an entity ID is needed. That entity ID is the connection between an inference and its feedback data.
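Conceptually, the join on the entity ID can be sketched with pandas. This is an illustrative sketch only; the column names (`user_id`, `prediction`, `actual`) are assumptions, and the actual join happens inside Qwak, not in user code.

```python
import pandas as pd

# Inference logs: what the model predicted, keyed by entity ID.
inferences = pd.DataFrame({
    "user_id": ["u1", "u2", "u3"],
    "prediction": [1, 0, 1],
})

# Feedback labels: what actually happened, keyed by the same entity ID.
feedback = pd.DataFrame({
    "user_id": ["u1", "u2", "u3"],
    "actual": [1, 1, 1],
})

# Join the two sources on the shared entity ID and compute a metric.
joined = inferences.merge(feedback, on="user_id")
accuracy = (joined["prediction"] == joined["actual"]).mean()
print(f"accuracy: {accuracy:.2f}")  # 2 of 3 predictions match
```

The entity ID is what makes this join possible: without it, there is no way to pair a logged inference with the label that later arrives for it.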
For instance, in our churn example, that can be created using the following command:
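A minimal sketch of such a predict function is shown below. The exact Qwak model interface may differ, and the feature name `monthly_usage` and the threshold-based "model" are placeholder assumptions standing in for a real trained model.

```python
import pandas as pd

def predict(df: pd.DataFrame) -> pd.DataFrame:
    # The incoming dataframe includes the entity ID (user_id). It is
    # logged to the Qwak lake for later feedback joins, but it is not a
    # model feature, so it is dropped before inference.
    features = df.drop(columns=["user_id"])
    # Placeholder for a real trained model: in an actual Qwak model this
    # would be loaded during the build/initialization step.
    predictions = (features["monthly_usage"] < 10).astype(int)
    return pd.DataFrame({"churn_prediction": predictions})
```

Dropping the entity ID inside predict keeps the feature set clean while still letting the platform record `user_id` alongside the inference.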
The predict function receives a dataframe that includes the entity ID (in the churn model's case, user_id) and drops it, because it isn't needed for inference. The user_id is logged in the Qwak lake, where it can later be used for feedback.
Once data is saved in analytics, feedback calls can be sent as well.
The feedback calls are connected to a specific model and use the same entity_id that was saved by the model. Different tags can be used for different use cases; in our case, each time a user churns, we send a churn label to the feedback API.
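A feedback call might be sketched as follows. The payload shape here is an assumption for illustration; consult the Qwak feedback API documentation for the actual schema and how the request is sent.

```python
import json

def build_feedback_payload(entity_id: str, tag: str, value: int) -> str:
    # Hypothetical payload shape: the entity ID ties this label back to
    # the inference that was logged with the same user_id.
    payload = {
        "entity_id": entity_id,
        "tag": tag,
        "value": value,
    }
    return json.dumps(payload)

# Each time a user churns, send a positive churn label for that user:
churn_event = build_feedback_payload("u1", "churn", 1)
```

Because the payload carries the same entity ID that was logged at inference time, Qwak can match this label to the earlier prediction.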
In the UI, an automatic visualization is created.
A few options can be configured through the UI.
Using this simple visualization, model performance can be tracked and used to validate whether the model is still relevant.
Get started for free today!