Build, organise, and distribute your ML pipelines

A flexible library to code your ML workflow, deploy locally or to our serverless cloud

Join BETA waitlist


An open-source library to manage your ML pipelines end to end.

Instantly run and monitor your pipelines on our serverless infrastructure.

backed by

Open-source library to build your ML pipelines

Organise your code into reusable functions and pipelines, and deploy locally.

An environment for each pipeline

Each pipeline is packaged with all the libraries and dependencies required to run your code.

A data snapshot per run

Every run snapshots all data and code, so you can inspect and reproduce it later.

Deploy to your infrastructure

Automatically generate an endpoint and Dockerfiles to run your pipeline on your own infrastructure.

Out-of-the-box pipelines

Built-in pipelines and functions for data processing, model fine-tuning, and much more.

import pipeline
from pipeline import function

@function
def xgboost_predict(data: dict) -> list:
    """
    Run predictions with my XGBoost model
    """

    # Use the custom pipeline helper to load the cached XGBoost model
    xb_model = pipeline.XGBModel.load_remote("mystic://project/xgboost_model")
    y_pred = xb_model.predict(data["x_data"])
    return y_pred

from pipeline import Pipeline
from pipeline.objects import File
from pipeline.models.hugging_face import TransformersModel

# Fine-tune GPT-J for your application
with Pipeline(pipeline_name="GPT-J_finetune") as pipeline:
    my_data = File(type="path", is_input=True)
    hf_model = TransformersModel("EleutherAI/gpt-j-6B")
    finetuned_model = hf_model.train(my_data)
    pipeline.output(finetuned_model)

gptj_pipeline = Pipeline.get_pipeline("GPT-J_finetune")
finetuned_model = gptj_pipeline.run("my_data/data.csv")

from pipeline import get_pipeline
from pipeline.docker import create_api

gpt_j_pipeline = get_pipeline("GPT-J_inference_fp16")
docker_path = create_api(pipelines=[gpt_j_pipeline])

Serverless cloud for your ML pipelines

Instant access to scalable infrastructure and monitoring tools.

Monitor your runs

Use our dashboard to keep track of your functions, models, and pipelines.

Serverless infrastructure, pay per second

Access our cloud of CPUs and GPUs, optimised for everything from lightweight analytics models to heavy deep learning models.

Join BETA waitlist
