# Create and Train Model
This section covers the steps to create a model and train it in the cloud using the `focoos` library. The following examples demonstrate how to interact with the Focoos API to manage models, datasets, and training jobs. In this guide, we will perform the following steps:
## 1. Select dataset
You can list publicly shared datasets using the following code:
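The snippet below is a minimal sketch: it assumes the client is constructed with an API key and exposes a `list_shared_datasets` method (names not shown above are assumptions).

```python
from focoos import Focoos

# Initialize the client with your API key (constructor argument assumed)
focoos = Focoos(api_key="<YOUR_API_KEY>")

# List the datasets shared publicly on the platform
shared_datasets = focoos.list_shared_datasets()
for dataset in shared_datasets:
    print(dataset)
```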
To view only your personal datasets, use the following code:
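A similar sketch for your own datasets, assuming the client exposes a `list_datasets` method (a name not confirmed above).

```python
from focoos import Focoos

focoos = Focoos(api_key="<YOUR_API_KEY>")

# List only the datasets uploaded to your account (method name assumed)
my_datasets = focoos.list_datasets()
for dataset in my_datasets:
    print(dataset)
```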
> **Note:** If you haven’t uploaded a dataset yet, you can follow this guide: How to load a dataset.
Once you've identified the dataset you want to use, you’ll need its reference (`dataset_ref`) to train your model. You can either copy it or store it in a variable like this:
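For example, assuming the reference is a plain string copied from the listing above:

```python
# Reference of the dataset you want to train on (placeholder value)
dataset_ref = "<YOUR_DATASET_REF>"
```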
## 2. Create model
The first step in personalizing your model is to create it by calling the `new_model` method on the `Focoos` object. You can choose the base model you want to personalize from the list of Focoos Models available on the platform; make sure to select the correct model for your task.
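To see which base models are available, the client may expose a listing helper; the sketch below assumes a `list_focoos_models` method (a name not confirmed above).

```python
from focoos import Focoos

focoos = Focoos(api_key="<YOUR_API_KEY>")

# List the Focoos Models available on the platform (method name assumed)
focoos_models = focoos.list_focoos_models()
for focoos_model in focoos_models:
    print(focoos_model)
```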
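Once you have picked a base model, you can create your own model from it. The sketch below assumes `new_model` accepts a name, a description, and the base model identifier (argument names are assumptions).

```python
# `focoos` is the client created in the previous snippet.
# Create a new model starting from one of the Focoos Models listed above
model = focoos.new_model(
    name="my-model",
    description="Model fine-tuned on my dataset",
    focoos_model="<FOCOOS_MODEL_NAME>",  # argument name assumed
)
```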
The `new_model` method returns a `RemoteModel` object that you can use to train the model and to perform remote inference.
## 3. Train model
Once the model is created, you can start the training process by calling the `train` method on the model object.
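A sketch of starting a training run with the dataset selected earlier; any extra options (instance type, volume size, maximum runtime, hyperparameters) are omitted here because their exact argument names depend on the SDK version.

```python
# `model` is the RemoteModel returned by new_model in step 2,
# `dataset_ref` is the dataset reference from the Select dataset step
training_info = model.train(dataset_ref=dataset_ref)
print(training_info)
```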
To get the `dataset_ref`, see the Select dataset step above.
You can further customize the training process by passing additional parameters to the `train` method (such as the instance type, the volume size, the maximum runtime, etc.) or use additional hyperparameters (see the list of available hyperparameters).
Furthermore, you can monitor the training progress by polling the training status. In a Jupyter notebook, use the `notebook_monitor_train` method:
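A one-line sketch, reusing the `model` object from the previous step (any optional arguments are omitted).

```python
# Poll the training status and display the progress inside the notebook
model.notebook_monitor_train()
```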
You can also get the training logs by calling the `train_logs` method:
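A sketch, assuming `train_logs` returns the log lines collected so far.

```python
# Fetch and print the training logs (return type assumed to be a list of lines)
logs = model.train_logs()
for line in logs:
    print(line)
```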
Finally, if for some reason you need to cancel the training, you can do so by calling the `stop_training` method:
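For example, assuming the method takes no arguments:

```python
# Cancel the running training job
model.stop_training()
```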
## 4. Visualize training metrics
You can visualize the training metrics by calling the `metrics` method:
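The sketch below assumes `MetricsVisualizer` can be imported from the library and wraps the `Metrics` object; the import path and the `log_metrics` method name are assumptions.

```python
from focoos.utils.metrics import MetricsVisualizer  # import path assumed

# Retrieve the training metrics for the model
metrics = model.metrics()

# Wrap them in a visualizer and print them (method name assumed)
visualizer = MetricsVisualizer(metrics)
visualizer.log_metrics()
```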
The `metrics` method returns a `Metrics` object that you can use to visualize the training metrics using a `MetricsVisualizer` object.
In notebooks, you can also plot the metrics by calling the `notebook_plot_training_metrics` method:
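A sketch, assuming the method lives on the `MetricsVisualizer` created above.

```python
# Plot the training metrics inline in the notebook
visualizer.notebook_plot_training_metrics()
```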
## 5. Test model
### Remote inference
Once the training is over, you can test your model using remote inference by calling the `infer` method on the model object.
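A sketch of remote inference, assuming `infer` accepts a path to a local image together with the optional `threshold`.

```python
from pprint import pprint

image_path = "./image.jpg"  # placeholder path to a local image

# Run remote inference; predictions below the threshold are discarded
result = model.infer(image_path, threshold=0.5)

# result is a FocoosDetections object
pprint(result)
```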
The result is a `FocoosDetections` object, containing a list of `FocoosDet` objects and, optionally, a dict with information about the latency of the inference. The `threshold` parameter is optional and defines the minimum confidence score for a detection to be considered valid (predictions with a confidence score lower than the threshold are discarded).
Optionally, you can preview the results by passing the `annotate` parameter to the `infer` method.
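A sketch, assuming that with `annotate=True` the call also returns an annotated preview of the input image (the exact return shape is an assumption).

```python
# With annotate=True the call also returns a preview of the prediction
result, preview = model.infer(image_path, threshold=0.5, annotate=True)
```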
### Local inference
> **Note:** To perform local inference, you need to install the package with one of the extra modules (`[cpu]`, `[torch]`, `[cuda]`, `[tensorrt]`). See the installation page for more details.
You can perform inference locally by getting the `LocalModel` you already trained and calling the `infer` method on your image. The first time you run the model locally, the model will be downloaded from the cloud and saved on your machine. Additionally, if you use CUDA or TensorRT, the model will be optimized for your GPU before running the inference (this can take a few seconds, especially for TensorRT).
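The sketch below assumes the client exposes a `get_local_model` method that takes the reference of your trained model and returns a `LocalModel` (the method name and the model reference placeholder are assumptions).

```python
from focoos import Focoos

focoos = Focoos(api_key="<YOUR_API_KEY>")

# Get a LocalModel for the model you already trained; on the first run the
# weights are downloaded from the cloud and cached on your machine
# (method name assumed)
local_model = focoos.get_local_model("<YOUR_MODEL_REF>")

image_path = "./image.jpg"  # placeholder path to a local image

# Run inference locally; with CUDA or TensorRT the model is optimized first
result, preview = local_model.infer(image_path, threshold=0.5, annotate=True)
```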
As with remote inference, you can pass the `annotate` parameter to return a preview of the prediction and play with the `threshold` parameter to change the minimum confidence score for a detection to be considered valid.