Getting Predictions over HTTP/HTTPS Stream #735
Comments
Hello -
Hi @PawelPeczek-Roboflow, can you give some code reference?
Which particular scenario are you asking for?
For the 1st one:
Well, that would be just sending a request using
Yes, it will be on 127.0.0.1:80
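For illustration, a minimal sketch of "just sending a request" with the requests library; the path (/predictions) and the payload shape are placeholders, since they depend entirely on how the service listening on 127.0.0.1:80 expects to receive data:

```python
import requests  # pip install requests

# Hypothetical example: POST a predictions payload to the service on 127.0.0.1:80.
# The "/predictions" path and the JSON structure are assumptions - the real values
# depend on the spec of the endpoint that will receive the data.
payload = {
    "predictions": [
        {"class": "cow", "confidence": 0.92, "x": 100, "y": 200, "width": 50, "height": 80}
    ]
}
response = requests.post("http://127.0.0.1:80/predictions", json=payload, timeout=5)
response.raise_for_status()
```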
OK, but that does not tell us anything about the endpoint accepting the data. Could you share something like a Swagger spec?
I didn't understand the question. Can you give some reference?
A REST API accepts requests formatted in a very specific way, dependent on the code of the service exposing the REST API.
Not knowing the spec of the endpoint makes it nearly impossible, or at least very hard, to create a client to call the API.
Can you please explain what an endpoint means here? @PawelPeczek-Roboflow
I believe this is quite a good article that may help: https://beeceptor.com/docs/concepts/http-endpoints/
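To make the term concrete, here is a rough sketch of a receiving endpoint using only the Python standard library. The path, HTTP method, and expected JSON body are arbitrary choices here; pinning those details down is exactly what a spec (e.g. Swagger/OpenAPI) would do for a real service:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class PredictionHandler(BaseHTTPRequestHandler):
    # This handler exposes one endpoint: POST /predictions accepting a JSON body.
    # Everything here (path, method, body shape) is an illustrative assumption.
    def do_POST(self):
        if self.path != "/predictions":
            self.send_error(404)
            return
        length = int(self.headers.get("Content-Length", 0))
        body = json.loads(self.rfile.read(length) or b"{}")
        print("received predictions:", body)
        self.send_response(200)
        self.end_headers()

if __name__ == "__main__":
    # Listen on 127.0.0.1:80 as mentioned above (binding to port 80 may require elevated privileges).
    HTTPServer(("127.0.0.1", 80), PredictionHandler).serve_forever()
```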
Search before asking
Question
Hi there, I am using the code below for inference on my RPi in headless mode.
# import the InferencePipeline interface
from inference import InferencePipeline
# import a built-in sink called render_boxes (sinks are the logic that happens after inference)
from inference.core.interfaces.stream.sinks import render_boxes

# create an inference pipeline object
pipeline = InferencePipeline.init(
    model_id="cow-lie-stand-walk/2",  # set the model id to a yolov8x model with input size 1280
    video_reference="rtsp://192.168.1.100:5543/live/channel0",  # set the video reference (source of video), it can be a link/path to a video file, an RTSP stream url, or an integer representing a device id (usually 0 for built in webcams)
    on_prediction=render_boxes,  # tell the pipeline object what to do with each set of inference results by passing a function
    api_key="",  # provide your roboflow api key for loading models from the roboflow api
)
# start the pipeline
pipeline.start()
# wait for the pipeline to finish
pipeline.join()
How can I retrieve predictions over HTTP/HTTPS? Or do I need to initiate a UDP sink?
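For reference, a minimal sketch of one way to do this without a UDP sink: replace render_boxes with a custom on_prediction callback that forwards each prediction over HTTP. This assumes the sink callback receives (predictions, video_frame) like render_boxes does, that the prediction dict is JSON-serializable, and that http://127.0.0.1:80/predictions is a placeholder for whatever endpoint actually receives the data:

```python
import requests  # pip install requests

from inference import InferencePipeline

# Placeholder URL - replace with the real endpoint that will accept your data.
PREDICTIONS_URL = "http://127.0.0.1:80/predictions"

def send_over_http(predictions: dict, video_frame) -> None:
    # Forward each prediction dict as JSON; swallow network errors so a flaky
    # receiver does not stop the pipeline.
    try:
        requests.post(PREDICTIONS_URL, json=predictions, timeout=1)
    except requests.RequestException as exc:
        print(f"failed to send predictions: {exc}")

pipeline = InferencePipeline.init(
    model_id="cow-lie-stand-walk/2",
    video_reference="rtsp://192.168.1.100:5543/live/channel0",
    on_prediction=send_over_http,  # custom sink instead of render_boxes
    api_key="",  # your roboflow api key
)
pipeline.start()
pipeline.join()
```

If the rendered boxes are still needed, the custom sink could call render_boxes(predictions, video_frame) first and then post the JSON; the point is that sending predictions over HTTP is just another sink.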
Additional
NA