The collaboration-first ML metadata store

Log, display, compare and share your Datasets, Models and Project Documentation in a single place.

@model

Version your models and metadata with a few lines of code.

Add a @model decorator to your training function and Layer will register and version the model. Load your versioned model with layer.get_model("my_model:2.1").

import layer
from layer.decorators import model
from xgboost import XGBClassifier

@model("california_housing")
def train():
    model = XGBClassifier()
    model.fit(X_train, y_train)
    layer.log({...})
    return model

train()
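Once train() has run, the registered model can be loaded anywhere with layer.get_model, as described above. A minimal sketch, assuming a version tag such as "california_housing:1.1" exists (the tag here is illustrative):

```python
import layer

# Fetch a specific registered version of the model.
# The version tag "california_housing:1.1" is an illustrative assumption.
model_entity = layer.get_model("california_housing:1.1")

# get_train() returns the underlying trained object (the XGBClassifier)
clf = model_entity.get_train()
```

Omitting the version tag (e.g. `layer.get_model("california_housing")`) fetches the latest version.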

@dataset

Reproducible pipelines?
Data versioning has you covered.

Add a @dataset decorator to your dataset build function, and Layer will register and version your training data for reproducible ML pipelines. Register a pandas.DataFrame, images, or even a PyTorch Dataset. Load your versioned dataset for training with layer.get_dataset("my_training_data:4.2").

import pandas as pd
from layer.decorators import dataset

@dataset("orders")
def build():
    data = {"col1": [1, 2], "col2": [3, 4]}
    df = pd.DataFrame(data=data)
    return df

build()
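The registered dataset can then be pulled into a training pipeline with layer.get_dataset, as mentioned above. A minimal sketch, assuming a version tag such as "orders:1.1" exists (the tag here is illustrative):

```python
import layer

# Fetch a specific registered version of the dataset.
# The version tag "orders:1.1" is an illustrative assumption.
dataset = layer.get_dataset("orders:1.1")

# Materialize it as a pandas DataFrame for training
df = dataset.to_pandas()
```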

layer.log()

Keep track of the story,
log insights and project history.

You can log your parameters, charts, metrics, and plots to Layer, then compare them across versions of your datasets and models, enabling experiment tracking and governance.

import torch
import layer
from layer.decorators import model

@model("object_detector")
def train():
    params = {"in": 20, "out": 30}
    model = torch.nn.Linear(params["in"], params["out"])
    layer.log(params)
    return model

train()

Projects

Bring it all together with a central repo and dynamic READMEs.

Projects are central repos for your ML entities: datasets and models. You can think of them like git repositories, except they are designed for machine learning from first principles. You can register all your metadata to your Layer Project and share it with your team.