Explore the integration of media technologies within your app. Discuss working with audio, video, camera, and other media functionalities.

Posts under Media Technologies topic

Radio station APP iOS
Hi everyone, I'm the owner of a radio station called Radio Krimi, and we have an official app on iOS. The technician who built it no longer replies to our messages, and we would like to update the app with a new audio stream link. I'm sorry, but I really don't know how to do this; it should be easy, since it's just a new link replacing an old one. Could someone please help us with the process? Thanks a lot! Seb https://apps.apple.com/fr/app/radio-krimi/id1034088733
Replies: 1 · Boosts: 0 · Views: 822 · Activity: 4w
How should playback readiness be determined with AVSampleBufferAudioRenderer when using AirPlay?
I'm implementing a custom playback pipeline using AVSampleBufferAudioRenderer together with AVSampleBufferRenderSynchronizer. hasSufficientMediaDataForReliablePlaybackStart appears to be the intended signal for determining when enough media has been queued to start playback. For local playback, this works well in practice: the property becomes true after a reasonable amount of media is enqueued.

However, when the output route is AirPlay, using this property becomes difficult. AirPlay requires significantly more buffered media before the renderer reports sufficient data, so the required preroll amount is much larger than for local playback. For short assets, it is possible to enqueue the entire audio track and still never observe hasSufficientMediaDataForReliablePlaybackStart == true. In that situation there is no more media data to enqueue, but the renderer still reports that playback is not ready.

Given this behavior, what is the recommended way to determine playback readiness when using AVSampleBufferAudioRenderer with AirPlay?
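One pattern worth considering (a sketch, not an official recommendation): observe the property via KVO for the normal case, but add an app-defined escape hatch for the exhausted-source case so short assets do not stall forever. The sourceIsExhausted closure and the 2-second fallback delay below are illustrative assumptions, not documented values, and the sketch assumes the property is KVO-observable as documented.

import AVFoundation

final class PlaybackGate {
    private var observation: NSKeyValueObservation?

    func startWhenReady(renderer: AVSampleBufferAudioRenderer,
                        synchronizer: AVSampleBufferRenderSynchronizer,
                        sourceIsExhausted: @escaping () -> Bool) {
        // Normal path: start as soon as the renderer reports readiness.
        observation = renderer.observe(\.hasSufficientMediaDataForReliablePlaybackStart,
                                       options: [.initial, .new]) { renderer, _ in
            if renderer.hasSufficientMediaDataForReliablePlaybackStart {
                synchronizer.setRate(1.0, time: .zero)
            }
        }
        // Fallback: if the whole (short) asset is already enqueued and the
        // renderer still reports not-ready, start anyway rather than stalling.
        DispatchQueue.main.asyncAfter(deadline: .now() + 2.0) {
            if sourceIsExhausted(), synchronizer.rate == 0 {
                synchronizer.setRate(1.0, time: .zero)
            }
        }
    }
}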
Replies: 0 · Boosts: 0 · Views: 383 · Activity: Mar ’26
MusicKit can't find identifiers
I am trying to create keys for my personal project with MusicKit and other resources, but MusicKit specifically for now. I want to gather my recent music history and log the time in my system, to combine with my other life data for analysis. I have created an Identifier with an appropriate Description and Bundle ID, and MusicKit is checked under App Services. I have saved, reset the cache, and waited all day, but the keys still have not updated: the field shows "There are no identifiers available that can be associated with the key". Please help!
Replies: 0 · Boosts: 0 · Views: 98 · Activity: Mar ’26
Swift Array Out of Bounds Crash in VTFrameProcessor when using VTLowLatencyFrameInterpolationParameters
Hi everyone, our team is encountering a reproducible crash when using VTLowLatencyFrameInterpolation on iOS 26.3 while processing a live LL-HLS input stream.

🤖 Environment
Device: iPhone 16
OS: iOS 26.3
Xcode: Xcode 26.3
Framework: VideoToolbox

💥 Crash Details
The application crashes with the following fatal error:
Fatal error: Swift/ContiguousArrayBuffer.swift:184: Array index out of range
The stack trace highlights the following:
VTLowLatencyFrameInterpolationImplementation processWithParameters:frameOutputHandler:
Called from VTFrameProcessor.process(parameters:)

Here is the simplified implementation block where the crash occurs. (Note: PrismSampleBuffer and PrismLLFIError are our internal custom wrapper types.)

// Create `VTFrameProcessorFrame` for the source (previous) frame.
let sourcePTS = sourceSampleBuffer.presentationTimeStamp
var sourceFrame: VTFrameProcessorFrame?
if let pixelBuffer = sourceSampleBuffer.imageBuffer {
    sourceFrame = VTFrameProcessorFrame(buffer: pixelBuffer, presentationTimeStamp: sourcePTS)
}

// Validate the source VTFrameProcessorFrame.
guard let sourceFrame else { throw PrismLLFIError.missingImageBuffer }

// Create `VTFrameProcessorFrame` for the next frame.
let nextPTS = nextSampleBuffer.presentationTimeStamp
var nextFrame: VTFrameProcessorFrame?
if let pixelBuffer = nextSampleBuffer.imageBuffer {
    nextFrame = VTFrameProcessorFrame(buffer: pixelBuffer, presentationTimeStamp: nextPTS)
}

// Validate the next VTFrameProcessorFrame.
guard let nextFrame else { throw PrismLLFIError.missingImageBuffer }

// Calculate interpolation intervals and allocate destination frame buffers.
let intervals = interpolationIntervals()
let destinationFrames = try framesBetween(firstPTS: sourcePTS, lastPTS: nextPTS, interpolationIntervals: intervals)
let interpolationPhase: [Float] = intervals.map { Float($0) }

// Create VTLowLatencyFrameInterpolationParameters.
// This sets up the configuration required for temporal frame interpolation
// between the previous and current source frames.
guard let parameters = VTLowLatencyFrameInterpolationParameters(
    sourceFrame: nextFrame,
    previousFrame: sourceFrame,
    interpolationPhase: interpolationPhase,
    destinationFrames: destinationFrames
) else {
    throw PrismLLFIError.failedToCreateParameters
}

try await send(sourceSampleBuffer)

// Process the frames.
// Using the progressive callback here to get the next processed frame as soon as
// it's ready, preventing the system from waiting for the entire batch to finish.
for try await readOnlyFrame in self.frameProcessor.process(parameters: parameters) {
    // Create an interpolated sample buffer based on the output frame.
    let newSampleBuffer: PrismSampleBuffer = try readOnlyFrame.frame.withUnsafeBuffer { pixelBuffer in
        try PrismLowLatencyFrameInterpolation.createSampleBuffer(from: pixelBuffer, readOnlyFrame.timeStamp)
    }
    // Pass the newly generated frame to the output stream.
    try await send(newSampleBuffer)
}

🙋 Questions
Are there any known limitations or bugs regarding VTLowLatencyFrameInterpolation when handling live 60fps streams?
Are there any undocumented constraints we should be aware of regarding source/previous frame timing, pixel buffer attributes, or how the destinationFrames and interpolationPhase arrays must be allocated?
Is a "warm-up" sequence recommended after startSession() before making the first process(parameters:) call?
Replies: 1 · Boosts: 0 · Views: 515 · Activity: Mar ’26
Offline Fairplay Error -42650
We have implemented offline FairPlay playback and it works fine. But at times, when trying to play back the offline downloaded content, we get the following error: "An unknown error occurred (-42650)". We tried looking up the error in the documentation but couldn't find anything relevant. What could possibly be causing this error?
Replies: 6 · Boosts: 2 · Views: 3.3k · Activity: Mar ’26
offline with FairPlay
I am working on offline license support for FairPlay and am trying to distinguish offline requests from live-streaming requests. How do I do that? Is there any way to identify the request type (offline or streaming) from the SPC?
Replies: 0 · Boosts: 1 · Views: 101 · Activity: Mar ’26
How to get iCloud item(photo/video) size?
How can I get an iCloud photo's file size without downloading the file to the device? Could I use a private API like this in production? Does anyone know another way?

func getFileSize(asset: PHAsset) -> Int64? {
    let resources = PHAssetResource.assetResources(for: asset)
    let resource = resources.first
    let size = resource?.value(forKey: "fileSize") as? Int64
    return size
}
Replies: 1 · Boosts: 0 · Views: 154 · Activity: Mar ’26
Unexpected Ambisonics format
When trying to load an ambisonics file using this project: https://github.com/robertncoomber/NativeiOSAmbisonicPlayback/ I get "Unexpected Ambisonics format". Interestingly, loading a 3rd-order ambisonics file works fine:

let ambisonicLayoutTag = kAudioChannelLayoutTag_HOA_ACN_SN3D | 16
let AmbisonicLayout = AVAudioChannelLayout(layoutTag: ambisonicLayoutTag)
let StereoLayout = AVAudioChannelLayout(layoutTag: kAudioChannelLayoutTag_Stereo)

So it's purely related to kAudioChannelLayoutTag_Ambisonic_B_Format.
Replies: 0 · Boosts: 0 · Views: 53 · Activity: Mar ’26
AU MIDI Plugin UI not showing
Hello, I am having an issue where a very small percentage of our users cannot view the UI of our MIDI plugin, Chord Prism. I have looked this up and found that for AU Instrument and Effect plugins this has been resolved within Logic by switching out of "Controls" view, but my situation is different: there is no option to switch out of "Controls" view at all. Is this something that can be fixed by adjusting settings within Logic?
Replies: 0 · Boosts: 0 · Views: 156 · Activity: Mar ’26
Implementing PHBackgroundResourceUploadExtension
Hi, I am trying to implement a PHBackgroundResourceUploadExtension to upload backup media files to an external cloud service, based on these docs: https://developer.apple.com/documentation/PhotoKit/uploading-asset-resources-in-the-background#Acknowledge-completed-jobs Creating jobs and the actual uploading work as expected, but the problem I have is in the acknowledgeCompletedJobs() function. When trying to access a job's resource, the resource is nil and thus has an empty assetLocalIdentifier and originalFilename. Has anybody successfully implemented this extension, or does anyone know why this might happen? Because the resource of an acknowledgeable job is empty, I cannot match it back to my processed assets.
Replies: 1 · Boosts: 0 · Views: 278 · Activity: Mar ’26
Does PhotoKit provide access to People, Places, and Shared Albums?
I know how to search for Smart Albums (Favorites, Selfies, etc.) containing photos:

// Get smart albums
PHFetchResult *smartAlbums = [PHAssetCollection fetchAssetCollectionsWithType:PHAssetCollectionTypeSmartAlbum
                                                                      subtype:PHAssetCollectionSubtypeAlbumRegular
                                                                      options:nil];

I have the following questions:
1. Is there a way to enumerate the People Smart Albums and access the photos in a specific People Smart Album?
2. Is there a way to enumerate the Places Smart Albums and access the photos in a specific Places Smart Album?
3. Is there a way to enumerate Shared Albums (shared with the current iCloud user) and access the photos in a specific Shared Album? (See the sketch below for this case.)
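For the Shared Albums part of the question, a hedged Swift sketch: cloud-shared albums can be fetched with the public .albumCloudShared subtype and enumerated like any other collection. (People and Places, by contrast, have no equivalent public smart-album subtypes that I know of.)

import Photos

func fetchSharedAlbums() {
    // Cloud-shared albums are regular albums with the .albumCloudShared subtype.
    let shared = PHAssetCollection.fetchAssetCollections(with: .album,
                                                         subtype: .albumCloudShared,
                                                         options: nil)
    shared.enumerateObjects { collection, _, _ in
        // Fetch the photos inside one shared album.
        let assets = PHAsset.fetchAssets(in: collection, options: nil)
        print("\(collection.localizedTitle ?? "Untitled"): \(assets.count) assets")
    }
}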
Replies: 1 · Boosts: 2 · Views: 771 · Activity: Mar ’26
Implementing PHBackgroundResourceUploadExtension
Hi, I am trying to implement a PHBackgroundResourceUploadExtension to upload backup media files to an external cloud service, based on these docs: https://developer.apple.com/documentation/PhotoKit/uploading-asset-resources-in-the-background#Acknowledge-completed-jobs Creating jobs and the actual uploading work as expected, but the problem I have is in the acknowledgeCompletedJobs() function. Within this function, when I try to access a job's resource, the resource is nil and thus has an empty assetLocalIdentifier and originalFilename. Has anybody successfully implemented this extension, or does anyone know why this might happen? Because the resource of an acknowledgeable job is empty, I cannot match it back to my processed assets.
Replies: 0 · Boosts: 0 · Views: 140 · Activity: Mar ’26
UVC over MFi – Is there official support? Implementation guidance?
Hello everyone, I'm looking for more detailed information regarding UVC (USB Video Class) over MFi within the Apple ecosystem and would appreciate some clarification. I'm interested in developing (or interfacing with) an accessory that transmits video over USB using the UVC standard, and I'd like to better understand how this works within the MFi (Made for iPhone) program. Here are my main questions:
1. Do iOS devices provide native support for UVC over USB-C or Lightning within the MFi framework? (See the discovery sketch below.)
2. Are there any specific firmware or authentication requirements when the accessory is MFi-certified?
3. Does UVC support depend solely on the hardware interface (USB-C vs. Lightning), or are there additional software-level requirements?
4. Is there any official documentation outlining the recommended flow for implementing UVC-based video capture accessories on iOS?
From what I understand, USB-C iPads appear to offer more direct support for standard UVC devices, but it's not entirely clear how this integrates with the MFi ecosystem on iOS, especially for commercial product development. If anyone has gone through this process or can point me to relevant technical documentation, I would greatly appreciate the guidance. Thank you!
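As a partial, hedged data point on question 1: on iPadOS 17 and later, standard UVC cameras attached over USB-C surface through AVFoundation as external capture devices, with no MFi-specific API involved. A minimal discovery sketch, assuming the .external device type is available on the target OS:

import AVFoundation

// List standard UVC cameras plugged into the USB-C port (iPadOS 17+).
func discoverExternalCameras() -> [AVCaptureDevice] {
    let session = AVCaptureDevice.DiscoverySession(
        deviceTypes: [.external],
        mediaType: .video,
        position: .unspecified
    )
    for device in session.devices {
        print("Found external camera: \(device.localizedName)")
    }
    return session.devices
}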
Replies: 2 · Boosts: 0 · Views: 372 · Activity: Mar ’26
Unity iOS (Metal) → WebRTC (Unity WebRTC) video stream to remote Unity client: PeerConnection connects but receiver renders black frames
Hello! My name is Mason Prather. I'm a graduate student at Kennesaw State University and a Research Engineer working in XR environments through my Graduate Research Assistant role. I'm currently building a research prototype that connects a mobile companion application to a VR headset. The mobile application is built in Unity and deployed on iOS, and it streams video frames to a remote Unity client using WebRTC.

Environment
Device: iPhone 15
OS: iOS 26.3 (tested on a physical device, not the Simulator)
Engine: Unity 2022.3.57f1
Graphics API: Metal
Streaming Technology: WebRTC (Unity WebRTC package)
Architecture: mobile Unity app streaming video frames to a remote Unity client
Receiver Device: Meta Quest Pro headset (Unity application)
Networking: LAN (UDP discovery + TCP signaling)
Video Source: Unity RenderTexture

Goal
The system should allow a VR user to view media stored on their phone inside a VR environment. The iOS app renders or captures media content, converts frames into a WebRTC video track, and streams the video to the headset.

Current Status
Connection setup works correctly: the signaling connection succeeds, ICE candidates are exchanged, the PeerConnection state becomes Connected, and the video track is created successfully. Inside the phone application, the RenderTexture displays correctly and frames appear correct locally. However, while the remote video track is active, the rendered frames appear black on the receiving client.

Questions
Are there known issues involving Unity WebRTC + iOS Metal texture capture?
Are there specific pixel format requirements when streaming textures from Unity on iOS?
Could the issue relate to texture readback limitations or GPU synchronization?

I am more than happy to provide screenshots and console logs upon request. If anyone has experience streaming Unity video frames via WebRTC on iOS, I would greatly appreciate any guidance.
Replies: 0 · Boosts: 0 · Views: 236 · Activity: Mar ’26
AVAssetDownloadConfiguration: How many video variants are actually downloaded when multiple variants exist in the HLS master playlist?
Hi, I'm trying to better understand how AVAssetDownloadConfiguration selects video variants when downloading HLS content for offline playback.

Suppose I have an HLS master playlist (.m3u8) that contains several video variants defined with #EXT-X-STREAM-INF: for example, multiple streams with the same resolution but different BANDWIDTH, or with different resolutions (720p, 1080p, etc.). My question is: how many video variants are actually downloaded when using AVAssetDownloadConfiguration without specifying any variantQualifiers? In other words, will the download task fetch only one variant or multiple variants, and does the behavior differ depending on whether the variants differ only by BANDWIDTH or also by RESOLUTION?

What I observed in testing: I always end up with only one video variant downloaded, specifically the one with the highest BANDWIDTH. In the m3u8 files I tested, all video variants had identical parameters (resolution, codec, frame rate, etc.) and differed only by the BANDWIDTH attribute in the master playlist. However, when inspecting the downloaded .movpkg, I noticed something interesting in boot.xml. It lists two video streams: one with complete="true" (the one with the highest bandwidth) and another with complete="no" (the one with the lowest bandwidth). I actually had three video streams listed in the m3u8, but the one with the middle bandwidth wasn't listed in boot.xml at all. There are also additional streams for audio and subtitles in boot.xml. This made me wonder whether the system initially attempts to download another video variant (possibly a lower-bitrate one), but then switches to the highest-quality variant and only completes that one.

Additional question about variantQualifiers: if I provide a predicate such as NSPredicate(format: "peakBitRate > 0"), which should theoretically match all variants, will the download task attempt to download all matching video variants, or will it still select only one? (A sketch of attaching such a qualifier follows below.)

So the main questions are:
1. Without variantQualifiers, does AVAssetDownloadConfiguration always download a single video variant, and if so, how is it chosen?
2. Does the behavior differ if variants have different resolutions vs. only different bitrates?
3. When a predicate matches multiple variants, can multiple video variants actually be downloaded into a single .movpkg?
4. Why might boot.xml list multiple video streams when only one appears to be fully downloaded?

Any clarification on the intended behavior would be greatly appreciated. Thanks!
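For reference, a hedged sketch of how such a qualifier plugs into a download, assuming the iOS 15+ configuration API; the URL and title below are placeholders, not values from the original post:

import AVFoundation

func makeDownloadTask(session: AVAssetDownloadURLSession) -> AVAssetDownloadTask {
    let asset = AVURLAsset(url: URL(string: "https://example.com/master.m3u8")!)
    let configuration = AVAssetDownloadConfiguration(asset: asset, title: "Example")

    // Restrict the primary content to variants matching a predicate; without
    // any qualifiers, the system picks a variant on its own.
    let qualifier = AVAssetVariantQualifier(predicate: NSPredicate(format: "peakBitRate > %d", 0))
    configuration.primaryContentConfiguration.variantQualifiers = [qualifier]

    return session.makeAssetDownloadTask(downloadConfiguration: configuration)
}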
Replies: 1 · Boosts: 0 · Views: 318 · Activity: Mar ’26
Audio System Trace: Zero Time Stamp
In Instruments, I'm seeing "Zero Time Stamp" events in the "Audio Server" lane. What does that mean?
Replies: 1 · Boosts: 0 · Views: 183 · Activity: Mar ’26
Final Cut Pro not loading my plugin
I have exhausted standard debugging approaches and need guidance on Final Cut Pro's AU plugin loading behavior.

PLUGIN OVERVIEW
My plugin is an AUv2 audio plugin built with JUCE 8. It loads and functions correctly in Pro Tools (AAX), Media Composer (AAX), Reaper (AU + VST3), Logic Pro (AU), GarageBand (AU), Audacity (AU), DaVinci Resolve / Fairlight (AU), Harrison Mixbus (AU), Ableton Live (AU), Cubase (VST3), and Nuendo (VST3). It does NOT load in Final Cut Pro (tested on 10.8.x, macOS 14.6 and 15.2).

DIAGNOSTICS COMPLETED
auval passes cleanly: auval -v aufx Hwhy Manu → AU VALIDATION SUCCEEDED.
The plugin is notarized and stapled: xcrun stapler validate → "The validate action worked". spctl --assess --type exec returns "not an app", which we understand is expected for .component bundles.
Hardened Runtime is enabled on the bundle.
We identified that our Info.plist contained a resourceUsage dictionary in the AudioComponents entry. System logs showed this was setting au_componentFlags = 2 (kAudioComponentFlag_SandboxSafe), causing FCP to attempt loading the plugin in-process inside its sandbox, where our UDP networking is denied. We removed the resourceUsage dict, confirmed au_componentFlags = 0 in the CAReportingClient log, and FCP now loads the plugin out-of-process via AUHostingServiceXPC_arrow. Despite au_componentFlags = 0 and out-of-process loading confirmed, the plugin still does not appear in FCP's effects browser.
We also identified and fixed a channel layout issue: our isBusesLayoutSupported was not returning true for 5-channel layouts (which FCP uses internally). This is now fixed.
The AU cache has been fully cleared (~/Library/Caches/com.apple.audio.AudioComponentRegistrar and /Library/Caches/com.apple.audio.AudioComponentRegistrar), coreaudiod and AudioComponentRegistrar were killed to force a rescan, and auval -a was run to force re-registration.
Replies: 0 · Boosts: 0 · Views: 249 · Activity: Mar ’26
AVSpeechSynthesizer reads Mandarin as Cantonese (iOS 26 beta 3)
In iOS 26, AVSpeechSynthesizer reads Mandarin with Cantonese pronunciation. No matter how I set the language, or how I change my phone's system settings, it doesn't work.

let utterance = AVSpeechUtterance(string: "你好啊")
//let voice = AVSpeechSynthesisVoice(language: "zh-CN") // doesn't work
let voice = AVSpeechSynthesisVoice(language: "zh-Hans") // doesn't work either
utterance.voice = voice
let synth = AVSpeechSynthesizer()
synth.speak(utterance)
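A hedged workaround sketch, not a confirmed fix: enumerate the installed voices and pick an explicit Mandarin voice, rather than relying on the language-based lookup. Voice availability varies by device and OS, so the zh-CN filter below is an assumption about what is installed:

import AVFoundation

func mandarinUtterance(_ text: String) -> AVSpeechUtterance {
    let utterance = AVSpeechUtterance(string: text)
    // Prefer an exact zh-CN voice; fall back to the language-based lookup.
    if let voice = AVSpeechSynthesisVoice.speechVoices().first(where: { $0.language == "zh-CN" }) {
        utterance.voice = voice
    } else {
        utterance.voice = AVSpeechSynthesisVoice(language: "zh-CN")
    }
    return utterance
}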
Replies: 3 · Boosts: 0 · Views: 629 · Activity: Mar ’26
Understanding CMIO Extension
Hello, I am getting the following errors when building a Mac Camera Extension that uses web sockets. I am using URLSessionWebSocketTask as my web socket library. I built a test program for my code, and there I can see my web sockets working properly, but when I run it from the System Extension I get the following errors. The socket opens for two to three messages, then crashes. I couldn't find any documentation online for these errors:

CMIOExtensionProvider.m:1975:-[CMIOExtensionProvider removeProviderContext:]_block_invoke Unregistered provider context <CMIOExtensionProviderContext: ->, don't be surprised if things go badly
CMIOExtensionProviderContext.m:64:-[CMIOExtensionProviderContext initWithConnection:]_block_invoke [391] received Connection invalid
Replies: 7 · Boosts: 0 · Views: 2.4k · Activity: Mar ’26
Virtual Camera Shows Jittering Frames and Solid Accent Color on macOS
Hello Apple Developer Support, I'm developing a virtual camera using the CMIOExtensionDevice / CMIOExtensionStreamSource APIs on macOS. While the virtual camera appears in system settings and in apps like Zoom and Google Meet, the video output exhibits the following issues:

Jittering frames: the first frame sometimes appears correctly, but subsequent frames flicker or jitter.
Solid color fill: eventually, the camera feed fills entirely with a solid accent color (e.g., blue) rather than the intended video content.
Console logs: repeated "Invalid display 0x00000000" messages appear in Console.app.

Setup details: the virtual camera is created using CMIOExtensionDevice and CMIOExtensionStream. Video frames are rendered from NSImage/CGImage using CGContext and copied into CVPixelBuffers. Frame delivery is controlled by a DispatchSourceTimer at 60 FPS. macOS version: 26.2. Xcode version: 26.1.

Observations: the "Invalid display 0x00000000" logs suggest that CGContext drawing or NSImage operations are failing in headless mode (i.e., there is no real display attached to the virtual camera). Using CIContext with .useSoftwareRenderer = true appears to mitigate some flicker, but not entirely.

Questions / Requests:
1. Is it expected that CoreMediaIO virtual cameras cannot reliably render CGImage/NSImage frames offscreen?
2. Are there recommended APIs or approaches to render virtual camera frames fully headless, avoiding display-dependent jitter? (One idea is sketched below.)
3. Is there any documentation or sample code from Apple showing stable video output from a virtual camera extension that does not rely on a physical display?

Any guidance or examples would be greatly appreciated; this issue prevents the virtual camera from being used reliably in standard video apps. Thank you, Savvy
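On question 2, one approach that avoids display-dependent paths entirely (a sketch under the assumption of a 32BGRA pixel buffer, not a confirmed fix): draw with CGContext directly into the CVPixelBuffer's backing memory, with no NSImage or NSView in the pipeline.

import CoreVideo
import CoreGraphics

func renderFrame(into pixelBuffer: CVPixelBuffer, draw: (CGContext) -> Void) {
    CVPixelBufferLockBaseAddress(pixelBuffer, [])
    defer { CVPixelBufferUnlockBaseAddress(pixelBuffer, []) }

    // Wrap the buffer's memory in a bitmap context; no display is involved.
    guard let context = CGContext(
        data: CVPixelBufferGetBaseAddress(pixelBuffer),
        width: CVPixelBufferGetWidth(pixelBuffer),
        height: CVPixelBufferGetHeight(pixelBuffer),
        bitsPerComponent: 8,
        bytesPerRow: CVPixelBufferGetBytesPerRow(pixelBuffer),
        space: CGColorSpaceCreateDeviceRGB(),
        bitmapInfo: CGImageAlphaInfo.premultipliedFirst.rawValue
            | CGBitmapInfo.byteOrder32Little.rawValue
    ) else { return }

    draw(context)
}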
Replies: 1 · Boosts: 0 · Views: 115 · Activity: Mar ’26