Layer is a collaboration-first metadata store for production ML. It lets you build, train, and track all of your machine learning project metadata, including ML models and datasets, with semantic versioning, extensive artifact logging, dynamic reporting, and local↔cloud training.
We are soft-launching today! I’ve been working on Layer for the past 2 years with an awesome team around the world. We really poured our hearts and minds into Layer and hope you will like it. Your feedback would be much appreciated!
To get started, you can simply run our Quickstart Example!
How is Layer different from other tools?
- Although there are plenty of ML and DS tooling products, we believe that there is still a large gap around collaboration. Many data science projects are hosted on GitHub, which, in our experience, does not provide sufficient depth and abstractions for ML/DS projects.
- We don’t want you to change how you develop your ML projects. No need to use a special remote notebook, no need to create YAML files or learn a new scripting language just to integrate a new tool into your stack. You can use Layer in your local notebook or Python script.
- We tried to hit a sweet spot in the level of abstraction for complex ML pipelines. Models and Datasets are first-class citizens in Layer.
- Layer can be easily integrated into your existing codebase. Adding Layer decorators on top of your existing functions is straightforward.
What can you do with Layer?
- Build, train and track your machine learning projects.
- Use remote GPU-enabled containers (free for 30 hours/week) to train your models with the help of the @fabric decorator, right from your local notebook.
- Create dynamic Project Cards by inserting comparison metrics, parameters, plots, images, and tables.
- With a single line of code, load models or datasets from publicly shared community projects into your next ML project.
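As a rough sketch of how a decorator like @fabric might tag a training function with the compute it should run on, here is a minimal stand-in (the decorator internals and the fabric label are illustrative assumptions, not Layer's actual implementation):

```python
import functools

def fabric(name):
    """Hypothetical sketch: tag a function with the remote compute
    profile ("fabric") a runner should execute it on."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            return fn(*args, **kwargs)
        # Metadata a remote runner could inspect before scheduling.
        wrapper._fabric = name
        return wrapper
    return decorator

@fabric("f-gpu-small")  # assumed label, for illustration only
def train():
    return "trained"

print(train())        # the function still runs normally: "trained"
print(train._fabric)  # a runner could read this tag to pick a container
```

The key design point is that the decorator changes nothing about how you call the function locally; it only attaches information that infrastructure can act on later.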
What’s under the hood?
At Layer, there are two modes for executing your projects: local and remote. To make your training function execute in local mode, you just add a @model decorator to it. This decorator attaches special metadata that Layer uses. When you call such a decorated function as you normally would, it is executed on your machine. Layer then uses the attached metadata and the function's return value to register the returned model in our model catalog.
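The local-mode flow above can be sketched with a plain Python decorator (the in-memory registry and the decorator body here are illustrative assumptions, not Layer's implementation):

```python
import functools

# Illustrative stand-in for Layer's model catalog.
MODEL_CATALOG = {}

def model(name):
    """Sketch of a @model-style decorator: attach metadata, run the
    function locally, and register its return value under `name`."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            result = fn(*args, **kwargs)   # executes on your machine
            MODEL_CATALOG[name] = result   # register the returned model
            return result
        wrapper._layer_metadata = {"name": name}  # attached metadata
        return wrapper
    return decorator

@model("spam-classifier")
def train():
    # Stand-in for real training; the return value is the "model".
    return {"weights": [0.1, 0.2]}

trained = train()  # called as you normally would
```

After the call, `MODEL_CATALOG["spam-classifier"]` holds the returned model, mirroring how the real decorator registers results without changing how you invoke the function.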
To make a function execute in remote mode, you decorate it with the @model decorator and then hand the decorated function off to Layer for remote execution.
This makes the function run remotely in a container on the Layer infrastructure. Layer pickles your function, ships it to a dedicated container, unpickles it there, and uses the metadata attached by the decorator to execute it. The function's return value is then saved into your project.