The collaboration-first ML metadata store
Log, display, compare, and share your datasets, models, and project documentation in a single place.
Track and reproduce experiments, effortlessly.
Record your learnings, datasets, and models along the project journey with layer.log() and the @dataset and @model decorators. Layer automatically versions your entities so you can precisely repeat experiments and track their history.
Learn more→

import layer
from layer.decorators import model
from xgboost import XGBClassifier

@model("california_housing")
def train():
    model = XGBClassifier()
    model.fit(X_train, y_train)
    layer.log({...})
    return model

train()
All in one place — never work in a silo again.
Close the collaboration gap and ease new-hire onboarding with central repositories for your datasets, ML models, metadata, and documentation. Log parameters, charts, metrics, and more; quickly locate ML entities, create project documentation, and hand over work seamlessly.
Learn more→

Worried about reliable pipelines? Don’t be.
Unreliable production pipelines suck. Layer automatically versions datasets and models as a sanity check on pipelines running in production.
Layer also helps you track shifts such as model performance drift and data quality issues with simple assertions like assert_not_null() and assert_true().
from layer.decorators import dataset
import pandas as pd

@dataset("product-data")
@assert_skewness("Price", -0.3, 0.3)
def create_my_dataset():
    columns = ["Id", "Product", "Price"]
    df = pd.DataFrame(data, columns=columns)
    return df
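To make the assertions above concrete, here is a minimal sketch of the kind of data-quality check an assertion like assert_not_null() expresses. This uses plain pandas only; `check_not_null` is a hypothetical helper for illustration, not part of the Layer API.

```python
import pandas as pd

def check_not_null(df: pd.DataFrame, column: str) -> bool:
    """Return True if the column contains no missing values."""
    return bool(df[column].notnull().all())

# A toy product table with no missing prices: the check passes.
df = pd.DataFrame(
    {"Id": [1, 2, 3], "Product": ["A", "B", "C"], "Price": [9.99, 4.50, 12.00]}
)
assert check_not_null(df, "Price")
```

In Layer, the same intent is declared as a decorator on the dataset function, so the check runs every time the pipeline materializes the dataset.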


More computing power? You got it.
Lack of computing power limits complex projects. Layer’s cloud resources let you train your models on powerful pre-defined GPUs and CPUs, and Layer handles all the pain and complexity of infrastructure. If you don’t need the power, use your own resources.
Learn more→

import layer
from xgboost import XGBClassifier

@layer.model("california_housing")
@layer.fabric("cpu-cluster")
def train():
    model = XGBClassifier()
    model.fit(X_train, y_train)
    layer.log({...})
    return model

layer.run([train])
