
Help needed to integrate camera intrinsics with CMSampleBuffer in iOS using StreamVideo SDK #578

Closed
andreasteich opened this issue Oct 17, 2024 · 2 comments


@andreasteich

What are you trying to achieve?

I’m currently trying to integrate the following code to retrieve camera intrinsics from the CMSampleBuffer to compute the field of view (FOV):

import AVFoundation
import CoreMedia
import simd

// Intrinsic-matrix delivery has to be requested on the connection before frames
// arrive, and it is not supported on every device/format, so check first.
if let captureConnection = videoDataOutput.connection(with: .video) {
    captureConnection.isEnabled = true
    if captureConnection.isCameraIntrinsicMatrixDeliverySupported {
        captureConnection.isCameraIntrinsicMatrixDeliveryEnabled = true
    }
}

nonisolated func computeFOV(_ sampleBuffer: CMSampleBuffer) -> Double? {
    // The intrinsics are attached to the sample buffer as a matrix_float3x3.
    guard let camData = CMGetAttachment(
        sampleBuffer,
        key: kCMSampleBufferAttachmentKey_CameraIntrinsicMatrix,
        attachmentModeOut: nil
    ) as? Data else { return nil }

    let intrinsics: matrix_float3x3? = camData.withUnsafeBytes { pointer in
        guard let baseAddress = pointer.baseAddress else { return nil }
        return baseAddress.assumingMemoryBound(to: matrix_float3x3.self).pointee
    }

    guard let intrinsics = intrinsics else { return nil }

    // simd matrices index as [column][row]: [0][0] is the focal length fx and
    // [2][0] is the principal point cx, both in pixels. Taking the image width
    // as w ≈ 2 * cx, the pinhole-model horizontal FOV is 2 * atan(w / (2 * fx)),
    // hence the factor of 2 on the result below.
    let fx = intrinsics[0][0]
    let w = 2 * intrinsics[2][0]
    return Double(2 * atan2(w, 2 * fx))
}
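
For context, the value returned above is the pinhole-model horizontal FOV in radians, θ = 2 · atan(w / (2 · fx)). A quick call site converting it to degrees could look like this (sampleBuffer stands in for whatever buffer the capture callback delivers):

if let fovRadians = computeFOV(sampleBuffer) {
    let fovDegrees = fovRadians * 180 / Double.pi
    print("Horizontal FOV: \(fovDegrees)°")
}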

However, I'm not very familiar with WebRTC on iOS, and I'm wondering where I can find the typical captureOutput callback that receives the CMSampleBuffer in the Sources/StreamVideo package. I would appreciate any guidance or suggestions on where to integrate this functionality into the existing codebase.

Thanks for your help!

Best regards

If possible, how can you achieve this currently?

Maybe possible?

What would be the better way?

I don't know right now.

@andreasteich
Author

Or let me ask differently: is it possible to capture video using AVFoundation itself and pass the frames manually to the video capturer?

@ipavlidakis
Collaborator

Hi @andreasteich,

That's a really interesting question. First things first, there is no way to capture frames yourself and pass them to the VideoCapturer. That being said, you can get access to the captured frames in order to perform any processing/analysis you need by providing your own AVCaptureVideoDataOutput. You can do so by calling try await call.addVideoOutput(...), as sketched below.
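
For illustration, a minimal sketch of that approach might look like the following. The delegate type and queue label are made up for the example, the addVideoOutput call follows the description above, and computeFOV(_:) is the function from the original post:

import AVFoundation
import CoreMedia
import StreamVideo

// Hypothetical delegate that inspects each captured frame.
final class FrameAnalyzer: NSObject, AVCaptureVideoDataOutputSampleBufferDelegate {
    func captureOutput(
        _ output: AVCaptureOutput,
        didOutput sampleBuffer: CMSampleBuffer,
        from connection: AVCaptureConnection
    ) {
        // Reuses computeFOV(_:) from the original post; intrinsics are only
        // attached once delivery is enabled on the connection.
        if let fov = computeFOV(sampleBuffer) {
            print("Horizontal FOV: \(fov) rad")
        }
    }
}

let analyzer = FrameAnalyzer()
let videoDataOutput = AVCaptureVideoDataOutput()
videoDataOutput.setSampleBufferDelegate(analyzer, queue: DispatchQueue(label: "frame-analysis"))

// Hand the output to the SDK, which attaches it to its capture session.
try await call.addVideoOutput(videoDataOutput)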

Best regards,
Ilias

@andreasteich closed this as not planned on Nov 11, 2024.