# Select and Run Inference with Focoos Models
This section covers how to perform inference with Focoos Models, either in the cloud or locally, using the `focoos` library.
As a reference, the following examples demonstrate how to perform inference with the `fai-rtdetr-m-obj365` model, but you can use any of the models listed in the models section.
## 📈 See Focoos Models metrics
You can see the metrics of the Focoos Models by calling the `metrics` method on the model.
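As a minimal sketch, assuming the `Focoos` client and the `get_remote_model` entry point of the focoos SDK (exact names may differ across versions, so check the API reference):

```python
from focoos import Focoos

# Initialize the client with your Focoos API key
focoos = Focoos(api_key="<YOUR-API-KEY>")

# Select the model and fetch its metrics
model = focoos.get_remote_model("fai-rtdetr-m-obj365")
metrics = model.metrics()
print(metrics)
```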
## ☁️ Cloud Inference
Running inference in the cloud is straightforward: select the model you want to use and call the `infer` method on your image. The image is uploaded to the FocoosAI cloud, where the model performs the inference and returns the results.
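As a minimal sketch, assuming the same `Focoos` client as above, a `detections` field on the result, and the `FocoosDet` attribute names shown in the comments (treat these as assumptions and consult the API reference for your installed version):

```python
from focoos import Focoos

# Initialize the client with your Focoos API key
focoos = Focoos(api_key="<YOUR-API-KEY>")

# Select the model to run in the cloud
model = focoos.get_remote_model("fai-rtdetr-m-obj365")

# The image is uploaded to the FocoosAI cloud and the detections are
# returned; predictions below the threshold are discarded
result = model.infer("./image.jpg", threshold=0.5)

# Each FocoosDet is assumed to expose a label, a confidence score and a box
for det in result.detections:
    print(det.label, det.conf, det.bbox)
```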
`result` is a `FocoosDetections` object containing a list of `FocoosDet` objects and, optionally, a dict with information about the latency of the inference. The `threshold` parameter is optional and defines the minimum confidence score for a detection to be considered valid; predictions with a confidence score below the threshold are discarded.
Optionally, you can preview the results by passing the `annotate` parameter to the `infer` method.
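For instance, assuming `infer` also returns an annotated preview image alongside the detections when `annotate=True` (the exact return convention may vary by version):

```python
from PIL import Image

# Request an annotated preview alongside the detections
result, preview = model.infer("./image.jpg", threshold=0.5, annotate=True)
Image.fromarray(preview).show()  # preview is assumed to be a numpy array
```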
## 🤖 Local Inference
> **Note:** To perform local inference, you need to install the package with one of the extra modules (`[cpu]`, `[torch]`, `[cuda]`, `[tensorrt]`). See the installation page for more details.
You can perform inference locally by selecting the model you want to use and calling the `infer` method on your image. The first time you run the model locally, it will be downloaded from the cloud and saved on your machine. Additionally, if you use CUDA or TensorRT, the model will be optimized for your GPU before running the inference (this can take a few seconds, especially for TensorRT).
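As a sketch mirroring the cloud example, assuming a `get_local_model` counterpart to `get_remote_model` (check the API reference for the exact name in your version):

```python
from focoos import Focoos

focoos = Focoos(api_key="<YOUR-API-KEY>")

# The first run downloads the model weights from the cloud to your machine;
# with CUDA or TensorRT the model is also optimized for your GPU
model = focoos.get_local_model("fai-rtdetr-m-obj365")

# Inference now runs entirely on your hardware
result = model.infer("./image.jpg", threshold=0.5)

for det in result.detections:
    print(det.label, det.conf, det.bbox)
```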
`result` is the same `FocoosDetections` object described above, containing a list of `FocoosDet` objects and, optionally, a dict with information about the latency of the inference. As with remote inference, you can pass the `annotate` parameter to get a preview of the prediction.
## 🖼️ Cloud Inference with Gradio
You can also use Gradio to create a web interface for your model. First, install the `dev` extra dependency:
```bash
pip install '.[dev]'
```
Then set an environment variable with your Focoos API key and run the app (you will select the model from the UI):
```bash
export FOCOOS_API_KEY_GRADIO=<YOUR-API-KEY>; python gradio/app.py
```