Welcome to the first product update of 2023. At the time of writing, the family of Pipeline users has grown to a staggering 3,829 members!
First of all, thank you for sticking with us as we have worked hard to improve our API reliability. As a provider of a production API, we know that a fast, scalable and reliable service is the number one requirement for our customers, and it will remain our top priority as we build out the feature set in 2023.
As well as optimising uptime, our focus over the past 12 months has been to iterate and refine the proprietary platform that powers our serverless GPU API. The challenge of simultaneously managing thousands of models in production in a reliable, robust and reproducible way has driven us to deliver novel innovations in routing, caching and orchestration.
As a team we are thrilled with the results and with the positive response from users, many of whom have trusted us with their deployments as they scale their own AI products and services.
✅ Pipeline tags, i.e. versioning
Until now, updating one of your deployed pipelines meant uploading it under a different name. Even though the new version shared much of its structure with the previous one, those similarities were lost in your dashboard and it was treated as a completely different pipeline.
Similar to how Docker image tags work, we have added tags to fix this. See an example below for one of our public pipelines:
Each new version will have its own pipeline ID and tag, but all versions will share the same name, so you can now switch between the different versions of a pipeline much more easily from your dashboard.
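To make the name/tag/ID relationship concrete, here is a minimal illustrative sketch of how a Docker-style `name:tag` reference can resolve to a unique pipeline ID. The registry and the `resolve` helper below are hypothetical stand-ins for illustration, not the Pipeline API itself.

```python
# Hypothetical registry: each (name, tag) pair points at a unique pipeline ID,
# so versions share a name but remain individually addressable.
pipeline_registry = {
    ("image-classifier", "v1"): "pipeline_a1b2c3",
    ("image-classifier", "v2"): "pipeline_d4e5f6",
    ("image-classifier", "latest"): "pipeline_d4e5f6",
}


def resolve(reference: str, default_tag: str = "latest") -> str:
    """Resolve a 'name:tag' reference to its unique pipeline ID."""
    name, _, tag = reference.partition(":")
    key = (name, tag or default_tag)
    if key not in pipeline_registry:
        raise KeyError(f"no pipeline found for {key[0]}:{key[1]}")
    return pipeline_registry[key]


print(resolve("image-classifier:v1"))  # pipeline_a1b2c3
print(resolve("image-classifier"))     # no tag given, falls back to 'latest'
```

As with Docker, omitting the tag falls back to a default, while every explicit tag still maps to exactly one immutable pipeline ID.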
We recommend exploring this new approach for a better experience. To start adding tags to your existing pipelines, use our CLI; you can find more details in our docs.
Our development roadmap for Q1 includes further API optimisations and feature additions, many of them made in response to feedback and requests from users.
Call for Beta partners!
Pipeline currently runs on our own cloud, which suits many of our users. We are now extending this capability, and we would love to talk with companies interested in running our internal platform in their own private environment (a local cluster, a remote cloud or a hybrid setup), getting the infrastructure abstraction they want with maximum privacy and security.
If this sounds interesting for your company, please get in touch with me at email@example.com.
We will keep doing monthly product updates, but to keep up to date and for immediate support from our team, make sure to join our Discord server.
You can upload custom ML pipelines built with your favourite libraries, including your pre-processing, inference and post-processing steps, and we will handle scaling the computation immediately, whether for a few requests or for thousands. Learn how in our docs, where we have a few tutorials. As always, if you have any questions, you can reach out to us on Discord.
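The pre-processing → inference → post-processing structure described above can be sketched in plain Python. This is an illustrative toy, not the Pipeline SDK: the function names are hypothetical and the "model" is a stand-in for whatever trained model your own pipeline would wrap.

```python
# Toy three-stage pipeline: pre-processing, inference, post-processing.
# In a real deployment each stage would call your own library code
# (e.g. a tokeniser, a trained model, an output decoder).

def preprocess(text: str) -> list[str]:
    """Normalise and tokenise the raw input."""
    return text.lower().split()


def infer(tokens: list[str]) -> float:
    """Toy 'model': score the input by its token count."""
    return float(len(tokens))


def postprocess(score: float) -> dict:
    """Shape the raw model output into an API-friendly response."""
    return {"score": score, "label": "long" if score > 3 else "short"}


def run_pipeline(text: str) -> dict:
    """Chain the three stages, as a deployed pipeline would per request."""
    return postprocess(infer(preprocess(text)))


print(run_pipeline("Hello from the Pipeline team"))
# {'score': 5.0, 'label': 'long'}
```

Keeping the stages as separate functions keeps each one independently testable, while the deployed pipeline simply composes them for every incoming request.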
That’s it for this month. Thanks for reading, and we’ll be back with more features next month. If you'd like to give us feedback on anything we've shipped, just get in touch - we'd love to hear from you!