Add remote exec capability for foundation models missing it (#1968)
* add remote exec for foundation models missing it
* make style
* fix missing name in unit tests
* fix depth estimation endpoint to return proper base64
* allow passing model id explicitly in /infer/llm endpoint
* fix image returned by depth estimation block on remote exec
docs/foundation/gaze.md
Lines changed: 7 additions & 0 deletions
@@ -2,6 +2,13 @@
You can detect the direction in which someone is looking using the L2CS-Net model.

+## Execution Modes
+
+L2CS-Net gaze detection supports both local and remote execution modes when used in workflows:
+
+- **Local execution**: The model runs directly on your inference server
+- **Remote execution**: The model can be invoked via HTTP API on a remote inference server using the `detect_gazes()` client method
+
## How to Use L2CS-Net

To use L2CS-Net with Inference, you will need a Roboflow API key. If you don't already have a Roboflow account, <a href="https://app.roboflow.com" target="_blank">sign up for a free Roboflow account</a>. Then, retrieve your API key from the Roboflow dashboard. Run the following command to set your API key in your coding environment:
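As a usage illustration for the remote execution mode described in the added section, here is a minimal sketch built on the `inference_sdk` HTTP client. Only the `detect_gazes()` method name comes from the diff; the server URL, API-key handling, and the image argument are assumptions to be adapted to your setup.

```python
import os

from inference_sdk import InferenceHTTPClient

# Point the client at a remote inference server; the URL below is an example.
client = InferenceHTTPClient(
    api_url="http://localhost:9001",
    api_key=os.environ["ROBOFLOW_API_KEY"],
)

# detect_gazes() ships the image to the server over HTTP and returns the
# gaze predictions; replace the path with your own image.
result = client.detect_gazes("path/to/face.jpg")
print(result)
```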
docs/foundation/sam2.md
Lines changed: 7 additions & 0 deletions
@@ -2,6 +2,13 @@
You can use Segment Anything 2 to identify the precise location of objects in an image. This process can generate masks for objects in an image iteratively, by specifying points to be included or excluded from the segmentation mask.

+## Execution Modes
+
+Segment Anything 2 supports both local and remote execution modes when used in workflows:
+
+- **Local execution**: The model runs directly on your inference server (GPU strongly recommended)
+- **Remote execution**: The model can be invoked via HTTP API on a remote inference server using the `sam2_segment_image()` client method
+
## How to Use Segment Anything

To use Segment Anything 2 with Inference, you will need a Roboflow API key. If you don't already have a Roboflow account, <a href="https://app.roboflow.com" target="_blank">sign up for a free Roboflow account</a>. Then, retrieve your API key from the Roboflow dashboard.
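For orientation, a rough sketch of the remote path described above. The diff names `sam2_segment_image()` as the client method but does not show which client exposes it or what parameters it takes; this sketch assumes it lives on the `inference_sdk` HTTP client and accepts an image reference, so verify the actual signature before use.

```python
import os

from inference_sdk import InferenceHTTPClient

# Point the client at a remote inference server; the URL below is an example.
client = InferenceHTTPClient(
    api_url="http://localhost:9001",
    api_key=os.environ["ROBOFLOW_API_KEY"],
)

# Hypothetical call shape: the method name comes from the doc change above,
# but its parameters are not shown there, so only the image is passed here.
masks = client.sam2_segment_image("path/to/image.jpg")
print(masks)
```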
docs/foundation/sam3_3d.md
Lines changed: 8 additions & 1 deletion
@@ -2,7 +2,14 @@
3D object generation model that converts 2D images with masks into 3D assets (meshes and Gaussian splats).

-This model is currenlty in Beta state! The model is only available if "SAM3_3D_OBJECTS_ENABLED" flag is on. The model can currently be ran using inference package, and also be used in Roboflow Worklows as a part of local inference server.
+This model is currently in Beta state! The model is only available if the "SAM3_3D_OBJECTS_ENABLED" flag is on. The model can currently be run using the inference package, and can also be used in Roboflow Workflows as part of a local inference server.
+
+## Execution Modes
+
+SAM3-3D supports both local and remote execution modes when used in workflows:
+
+- **Local execution**: The model runs directly on your inference server (32GB+ VRAM GPU strongly recommended)
+- **Remote execution**: The model can be invoked via HTTP API on a remote inference server using the `sam3_3d_infer()` client method or the `/sam3_3d/infer` endpoint

## DISCLAIMER: In order to run this model you will need a 32GB+ VRAM GPU machine.
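To show what remote execution against the `/sam3_3d/infer` route named above could look like, here is a sketch using plain `requests`. The payload field names are assumptions inferred from the model description (a 2D image plus a mask in, a 3D asset out), not a documented schema, so check the server's request model before relying on this.

```python
import base64
import os

import requests

# Example server URL; only the /sam3_3d/infer route is named in the doc change.
SERVER_URL = "http://localhost:9001/sam3_3d/infer"


def to_base64(path: str) -> str:
    with open(path, "rb") as handle:
        return base64.b64encode(handle.read()).decode("utf-8")


# Field names below ("image", "mask", "api_key") are assumptions. The remote
# server must be started with the SAM3_3D_OBJECTS_ENABLED flag on.
payload = {
    "api_key": os.environ["ROBOFLOW_API_KEY"],
    "image": {"type": "base64", "value": to_base64("scene.jpg")},
    "mask": {"type": "base64", "value": to_base64("object_mask.png")},
}

response = requests.post(SERVER_URL, json=payload, timeout=300)
response.raise_for_status()
print(list(response.json().keys()))
```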