Dive into the technical aspects of audio on your device, including codecs, format support, and customization options.

Audio Documentation

Posts under Audio subtopic

Post · Replies · Boosts · Views · Activity

TTS Audio Unit Extension: File Write Access in App Group Container Denied Despite Proper Entitlements
I'm developing a TTS Audio Unit Extension that needs to write trace/log files to a shared App Group container. While the main app can successfully create and write files to the container, the extension gets sandbox denied errors despite having proper App Group entitlements configured. Setup: Main App (Flutter) and TTS Audio Unit Extension share the same App Group App Group is properly configured in developer portal and entitlements Main app successfully creates and uses files in the container Container structure shows existing directories (config/, dictionary/) with populated files Both targets have App Group capability enabled and entitlements set Current behavior: Extension can access/read the App Group container Extension can see existing directories and files All write attempts are blocked with "sandbox deny(1) file-write-create" errors Code example: const char* createSharedGroupPathWithComponent(const char* groupId, const char* component) { NSString* groupIdStr = [NSString stringWithUTF8String:groupId]; NSString* componentStr = [NSString stringWithUTF8String:component]; NSURL* url = [[NSFileManager defaultManager] containerURLForSecurityApplicationGroupIdentifier:groupIdStr]; NSURL* fullPath = [url URLByAppendingPathComponent:componentStr]; NSError *error = nil; if (![[NSFileManager defaultManager] createDirectoryAtPath:fullPath.path withIntermediateDirectories:YES attributes:nil error:&error]) { NSLog(@"Unable to create directory %@", error.localizedDescription); } return [[fullPath path] UTF8String]; } Error output: Sandbox: simaromur-extension(996) deny(1) file-write-create /private/var/mobile/Containers/Shared/AppGroup/36CAFE9C-BD82-43DD-A962-2B4424E60043/trace Key questions: Are there additional entitlements required for TTS Audio Unit Extensions to write to App Group containers? Is this a known limitation of TTS Audio Unit Extensions? What is the recommended way to handle logging/tracing in TTS Audio Unit Extensions? If writing to App Group containers is not supported, what alternatives are available? Current entitlements: <dict> <key>com.apple.security.application-groups</key> <array> <string>group.com.<company>.<appname></string> </array> </dict>
0
0
120
Apr ’25
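For the logging/tracing question in the post above, here is a minimal sketch of the unified-logging route, which avoids file writes in the extension sandbox entirely; the subsystem and category strings are placeholder assumptions, not values from the original post.

```swift
import os

// Sketch: trace from the TTS extension via unified logging instead of writing files.
// The subsystem/category strings below are placeholders.
let ttsLog = Logger(subsystem: "com.example.tts-extension", category: "trace")

func trace(_ message: String) {
    // .public keeps the message readable in Console.app / `log stream`.
    ttsLog.debug("\(message, privacy: .public)")
}
```

Logs can then be read from a connected device with Console.app or `log stream --predicate 'subsystem == "com.example.tts-extension"'`, without any App Group write access.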
Why does AVAudioRecorder show 8 kHz when iPhone hardware is 48 kHz?
Hi everyone, I’m testing audio recording on an iPhone 15 Plus using AVFoundation. Here’s a simplified version of my setup: let settings: [String: Any] = [ AVFormatIDKey: Int(kAudioFormatLinearPCM), AVSampleRateKey: 8000, AVNumberOfChannelsKey: 1, AVLinearPCMBitDepthKey: 16, AVLinearPCMIsFloatKey: false ] audioRecorder = try AVAudioRecorder(url: fileURL, settings: settings) audioRecorder?.record() When I check the recorded file’s sample rate, it logs: Actual sample rate: 8000.0 However, when I inspect the hardware sample rate: try session.setCategory(.playAndRecord, mode: .default) try session.setActive(true) print("Hardware sample rate:", session.sampleRate) I consistently get: Hardware sample rate: 48000.0 My questions are: Is the iPhone mic actually capturing at 8 kHz, or is it recording at 48 kHz and then downsampling to 8 kHz internally? Is there any way to force the hardware to record natively at 8 kHz? If not, what’s the recommended approach for telephony-quality audio (true 8 kHz) on iOS devices? Thanks in advance for your guidance!
1
0
277
Sep ’25
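A minimal sketch of how to check what the hardware actually grants for the post above, assuming an active AVAudioSession; setPreferredSampleRate is only a request, and if the session still reports 48 kHz then the 8 kHz file is presumably produced by sample-rate conversion rather than native 8 kHz capture.

```swift
import AVFoundation

// Minimal sketch: request an 8 kHz hardware rate and see what the session actually grants.
func checkHardwareRate() throws {
    let session = AVAudioSession.sharedInstance()
    try session.setCategory(.playAndRecord, mode: .default)
    try session.setPreferredSampleRate(8_000)   // a preference, not a guarantee
    try session.setActive(true)

    // On most iPhones this still reports 48000.0: the mic path runs at 48 kHz,
    // and an 8 kHz AVAudioRecorder file is produced by sample-rate conversion.
    print("Granted hardware sample rate:", session.sampleRate)
}
```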
dlsym cannot find symbol g_dwILResult when debugging an audio plugin
I am trying to debug the AAX version of my plugin (MIDI effect) on Pro Tools, but I am getting the following error (Mac console) when attempting to load it: dlsym cannot find symbol g_dwILResult in CFBundle etc.. I used Xcode 16.4 to build the plugin. Has anybody come across the same or a similar message? Best, Achillefs Axart Labs
2
0
574
Sep ’25
Can backgrounded apps record audio?
I'd like to find out: Can backgrounded apps record audio? In the past as I recall, I found that backgrounded apps were pretty restricted and couldn't do much of anything. However I'm not familiar with the current state of affairs. With iOS 15.8 and above, can backgrounded apps record audio if they've been given permission by the user to access the microphone? Thanks.
3
0
585
Jan ’26
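A minimal sketch of the pieces background recording generally requires, assuming the app declares the audio background mode (UIBackgroundModes = audio in Info.plist), the user has granted microphone permission, and recording is already running when the app leaves the foreground; the class and tap handler here are illustrative.

```swift
import AVFoundation

// Sketch: with UIBackgroundModes = ["audio"] in Info.plist and microphone permission granted,
// a recording started in the foreground can generally continue after the app is backgrounded.
final class BackgroundRecorder {
    private let engine = AVAudioEngine()

    func start() throws {
        let session = AVAudioSession.sharedInstance()
        try session.setCategory(.record, mode: .default)
        try session.setActive(true)

        let input = engine.inputNode
        let format = input.outputFormat(forBus: 0)
        input.installTap(onBus: 0, bufferSize: 4096, format: format) { buffer, _ in
            // Hypothetical handler: write `buffer` to a file or process it here.
        }
        try engine.start()
    }
}
```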
Convert CoreAudio AudioObjectID to IOUSB LocationID
Is there a recommended way on macOS 26 Tahoe to take a CoreAudio AudioObjectID and use it to lookup the underlying USB LocationID? I previously used AudioObjectID to query the corresponding DeviceUID with kAudioDevicePropertyDeviceUID. Then I queried for the IOService matching kIOAudioEngineClassName with property kIOAudioEngineGlobalUniqueIDKey matching DeviceUID, and I loaded kUSBDevicePropertyLocationID from the result. This fails on macOS 26, because the IO Registry for the device has an entry for usbaudiod rather than AppleUSBAudioEngine, and usbaudiod does not include a kIOAudioEngineGlobalUniqueIDKey property (or any other property to map it to a CoreAudio DeviceUID). My use-case here is a piece of audio recording software that allows configuring a set of supported audio devices via USB HID prior to recording. I present the user with a list of CoreAudio devices to use, but without a way to lookup the underlying USB LocationID, I cannot guarantee that the configured device matches the selected device (e.g. if the user plugged in two identical microphones).
2
0
631
Sep ’25
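For reference, a sketch of the first half of the lookup the post above describes (AudioObjectID to DeviceUID); it is the second half, matching that UID against an IOAudioEngine entry in the IO Registry, that reportedly breaks when the device is published by usbaudiod on macOS 26.

```swift
import CoreAudio

// Sketch of the first step the post describes: resolve a CoreAudio AudioObjectID to its DeviceUID.
// (The second step — matching that UID against an IOAudioEngine entry in the IO Registry —
// is the part that no longer works when the device is published by usbaudiod.)
func deviceUID(for deviceID: AudioObjectID) -> String? {
    var address = AudioObjectPropertyAddress(
        mSelector: kAudioDevicePropertyDeviceUID,
        mScope: kAudioObjectPropertyScopeGlobal,
        mElement: kAudioObjectPropertyElementMain)
    var uid: CFString? = nil
    var size = UInt32(MemoryLayout<CFString?>.size)
    let status = withUnsafeMutablePointer(to: &uid) {
        AudioObjectGetPropertyData(deviceID, &address, 0, nil, &size, $0)
    }
    return status == noErr ? uid as String? : nil
}
```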
Is there a way to get lossless music playback on macOS?
I noticed that while playing back the same tracks via MusicKit on different OSes I get different results regarding the audio files being streamed. Playing back a lossless file with 24-bit 48 kHz and watching the Console for RemotePlayerService I get: on iPadOS: Lossless; groupID: audio-alac-stereo-48000-24; bitDepth: 24-bit; sampleRate: 48khz; codec: alac; channels: 2; layout: Stereo; on macOS: Creating AudioQueue with format:'paac', framesPerPacket:1024, sampleRate:44100 While the iPad looks perfect, the Mac does not. Is there a way to fix this issue on macOS? BTW: I switched the Audio MIDI Setup settings before, after, and while the macOS app was launched. I also switched to different output devices. I wasn't able to change the bad audio output on the Mac. I tested this under Sequoia 15.5 and Tahoe beta 1, Xcode 16.4 and 26 beta 1. The AudioVariants of the Album/Tracks are .dolbyAtmos, .lossless, .lossyStereo Apple Music displays Lossless 24 Bit/48 kHz ALAC when clicking on the player control icon on macOS I hope there are only some missing or misconfigured properties to get macOS up to par. Thanks :-)
0
1
167
Jun ’25
Windows Apple Music: how to enumerate the local library or export it? Is Library.musicdb documented / API available?
Environment Windows 11 [edition/build]: [e.g., 23H2, 22631.x] Apple Music for Windows version: [e.g., 1.x.x from Microsoft Store] Library folder: C:\Users<user>\Music\Apple Music\Apple Music Library.musiclibrary Summary I need a supported way to programmatically enumerate the local Apple Music library on Windows (track file paths, playlists, etc.) for reconciliation with the on-disk Media folder. On macOS this used to be straightforward via scripting/export; on Windows I can’t find an equivalent. What I’m seeing in the library bundle Library.musicdb → not SQLite. First 4 bytes: 68 66 6D 61 ("hfma"). Library Preferences.musicdb → also starts with "hfma". artwork.sqlite → SQLite but appears to be artwork cache only (no track file paths). Extras.itdb → has SQLite format 3 header but (from a quick scan) not seeing track locations. Genius.itdb → not a SQLite database on this machine. What I’ve tried Attempted to open Library.musicdb with SQLite providers → error: “file is not a database.” Binary/string scans (ASCII, UTF-16LE/BE, null-stripped) of Library.musicdb → did not reveal file paths or obvious plist/XML/JSON blobs. The Windows Apple Music UI doesn’t appear to expose “Export Library / Export Playlist” like legacy iTunes did, and I can’t find a public API for local library enumeration on Windows. What I’m trying to accomplish Read local track entries (absolute or relative paths), detect broken links, and reconcile against the Media folder. A read-only solution is fine; I do not need to modify the library. Questions for Apple Is the Library.musicdb file format documented anywhere, or is there a supported SDK/API to enumerate the local library on Windows? Is there a supported export mechanism (CLI, UI, or API) on Windows Apple Music to dump the local library and/or playlists (XML/CSV/JSON)? Is there a Windows-specific equivalent to the old iTunes COM automation or any MusicKit surface that can return local library items (not streaming catalog) and their file locations? If none of the above exist today, is there a recommended workaround from Apple for library reconciliation on Windows (e.g., documented support for importing M3U/M3U8 to rebuild the local library from disk)? Are there any plans/timeline for adding Windows feature parity with iTunes/Music on macOS for exporting or scripting the local library? Why this matters For large personal libraries, users occasionally end up with orphaned files on disk or broken links in the app. Without an export or API, it’s difficult to audit and fix at scale on Windows. Reference details (in case it helps triage) Library.musicdb header bytes: 68-66-6D-61-A0-00-00-00-10-26-34-00-15-00-01-00 (ASCII shows hfma…). artwork.sqlite is readable but doesn’t contain track file paths (appears limited to artwork). I can supply a minimal repro tool and logs if that’s helpful. Feature request (if no current API) Add an official Export Library/Playlists action on Windows Apple Music, or Provide a read-only Windows API (or schema doc) that surfaces track file locations and playlist membership from the local library. Thanks in advance for any guidance or pointers to docs I might have missed.
0
0
357
Sep ’25
Ducking MusicKit output when playing another sound
I am developing an app that uses MusicKit to play music, and I need spoken words played to the user while ducking the audio coming from MusicKit (application music player). The built-in Siri voices are not of sufficient quality, so I am using an external service to create an mp3 file and then play it back using AVAudioSession. Sample code below. The problem I am having is that .duckOthers is not ducking the Application Music Player output. Is this a bug or am I doing this wrong? // Configure audio session for system-wide ducking try AVAudioSession.sharedInstance().setCategory(.playback, mode: .spokenAudio, options: [.duckOthers, .mixWithOthers]) try AVAudioSession.sharedInstance().setActive(true) // Set the ducking level to maximum try AVAudioSession.sharedInstance().setPreferredIOBufferDuration(0.005) // Create and configure audio player self.audioPlayer = try AVAudioPlayer(data: audioData) self.audioPlayer?.delegate = self self.audioPlayer?.volume = 1.0 // Ensure full volume for speech self.audioPlayer?.prepareToPlay() // Set the audio player's settings for maximum clarity self.audioPlayer?.enableRate = false self.audioPlayer?.pan = 0.0 // Center the audio self.audioPlayer?.play()
0
0
66
Apr ’25
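Not a confirmed answer to the post above, but one commonly tried variation: keep the ducking session active only while the spoken clip plays and deactivate it with .notifyOthersOnDeactivation when playback ends, so other audio is told it can return to full volume. Whether ApplicationMusicPlayer in the same process counts as "other" audio for .duckOthers is exactly the open question.

```swift
import AVFoundation

// Sketch: keep the ducking session active only for the duration of the spoken clip.
final class SpeechPlayer: NSObject, AVAudioPlayerDelegate {
    private var player: AVAudioPlayer?

    func speak(_ audioData: Data) throws {
        let session = AVAudioSession.sharedInstance()
        try session.setCategory(.playback, mode: .spokenAudio, options: [.duckOthers])
        try session.setActive(true)                      // ducking begins here

        player = try AVAudioPlayer(data: audioData)
        player?.delegate = self
        player?.play()
    }

    func audioPlayerDidFinishPlaying(_ player: AVAudioPlayer, successfully flag: Bool) {
        // Ending the session tells other audio it can return to full volume.
        try? AVAudioSession.sharedInstance().setActive(false, options: [.notifyOthersOnDeactivation])
    }
}
```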
AirPods with H2 and studio-quality recording - how to replicate Camera video capture
Using an iPhone Pro 12 running iOS 26.0.1, with AirPods Pro 3. Camera app does capture video with what seems to be "Studio Quality Recording". Am trying to replicate that SQR with my own Camera like app, and while I can pull audio in from the APP3 mic, and my video capture app is recording a 48,000Hz high-bitrate video, the audio still sounds non-SQR. I'm seeing bluetoothA2DP , bluetoothLE , bluetoothHFP as portType, and not sure if SQR depends on one of those? Is there sample code demonstrating a SQR capture? Nevermind video and camera, just audio even? Also, I don't understand what SQR is doing between the APP3 and the iPhone. What codec is that? What bitrate is that? If I capture video using Capture and inspect the audio stream I see mono 74.14 kbit/s MPEG-4 AAC, 48000 Hz. But I assume that's been recompressed and not really giving me any insight into the APP3 H2 transmission?
1
0
173
Oct ’25
macOS Tahoe: Can't setup AVAudioEngine with playthrough
Hi, I'm trying to set up an AVAudioEngine for USB audio recording and monitoring playthrough. As soon as I try to set up playthrough I get an error in the console: AVAEInternal.h:83 required condition is false: [AVAudioEngineGraph.mm:1361:Initialize: (IsFormatSampleRateAndChannelCountValid(outputHWFormat))] Any ideas how to fix it? // Set the input device try? setupInputDevice(deviceID: inputDevice) let input = audioEngine.inputNode // Force a stereo format let inputHWFormat = input.inputFormat(forBus: 0) let stereoFormat = AVAudioFormat(commonFormat: inputHWFormat.commonFormat, sampleRate: inputHWFormat.sampleRate, channels: 2, interleaved: inputHWFormat.isInterleaved) guard let format = stereoFormat else { throw AudioError.deviceSetupFailed(-1) } print("Input format: \(inputHWFormat)") print("Forced stereo format: \(format)") audioEngine.attach(monitorMixer) audioEngine.connect(input, to: monitorMixer, format: format) // MonitorMixer -> MainMixer (Output) // Problem here, format: format also breaks. audioEngine.connect(monitorMixer, to: audioEngine.mainMixerNode, format: nil)
0
0
212
Oct ’25
Random EXC_BAD_ACCESS using AVFoundation
My app uses the AVFoundation to pronounce some words. Running the app from Xcode, either to a simulator or device, I frequently get this crash at start-up: AXSpeech (13): EXC_BAD_ACCESS (code=EXC_I386_GPFLT). It seems to occur randomly, maybe 20%-30% of the time I launch the app. When it does not crash, using audio works as expected. When launched from the device, it never crashes (so far, at least). Here's the code that outputs speech: Declared at the top level of the View struct: @State var synth = AVSpeechSynthesizer() In the View, as part of a Button's action closure: let utterance = AVSpeechUtterance(string: answer) utterance.voice = AVSpeechSynthesisVoice(language: "en_US") synth.speak(utterance) Any idea on how to stop this? It's annoying having to launch the app multiple times to test on a simulator or device.
1
0
574
3w
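One hedged guess at the crash in the post above: an AVSpeechSynthesizer stored as @State in a value-type View can be re-created or released while speech is in flight. Below is a sketch of keeping it in a reference type instead; the Speaker class and view names are illustrative.

```swift
import SwiftUI
import AVFoundation

// Sketch: keep the synthesizer in a reference type so it outlives View re-creation.
final class Speaker: ObservableObject {
    private let synth = AVSpeechSynthesizer()

    func speak(_ text: String) {
        let utterance = AVSpeechUtterance(string: text)
        utterance.voice = AVSpeechSynthesisVoice(language: "en-US")
        synth.speak(utterance)
    }
}

struct AnswerView: View {
    @StateObject private var speaker = Speaker()   // owned once per view identity
    let answer: String

    var body: some View {
        Button("Speak answer") { speaker.speak(answer) }
    }
}
```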
APNs
{ "aps": { "content-available": 1 }, "audio_file_name": "ding.caf", "audio_url": "https://example.com/audio.mp3" } When the app is in the background or killed, it receives a remote APNs push. The data format is roughly as shown above. How can I play the MP3 audio file at the specified "audio_url"? The user does not need to interact with the device when receiving the APNs. How can I play the audio file immediately after receiving it?
1
0
239
Oct ’25
Creating RTP-MIDI Sessions via MIDINetworkSession C API (dlopen/dlsym) on macOS 15?
I’m an amateur developer working on a free utility for composers/producers, for which the macOS release needs to create and name RTP-MIDI sessions in Audio MIDI Setup from the command line (so I can ship a small C helper instead of telling users to click through the UI). Here’s what I’ve tried so far, without luck: • Plist hacks: Injecting entries into ~/Library/Audio/MIDI Configurations/*.mcfg works when AMS is closed, but AMS immediately locks and reverts my changes when it’s open. • CoreMIDI C API: I can create virtual ports with MIDISourceCreate, but attempting MIDIObjectGetDataProperty on the apple.midirtp.session plugin always returns err –10836. • Obj-C & Swift: Loading MIDINetworkSession and calling defaultSession, init, setNetworkName: and setting enabled = YES doesn’t produce a new session object in the Network panel. • dlopen/dlsym: I extracted the real CoreMIDI binary out of the dyld shared cache and tried binding _MIDINetworkSessionCreate, _SetName, _SetEnabled, etc., but all the symbols come back null or my tool segfaults. • Plugin registration: I’ve pulled the factory UUID (70C9C5EA-7C65-11D8-B317-000393A34B5A) from /System/Library/Extensions/AppleMIDIRTPDriver.plugin/Contents/Info.plist and called CFPlugInRegisterFactories, but it still never exposes the session-creation calls. At this point I’m convinced I’m either loading the wrong binary or missing one critical step in registering the RTP-MIDI plugin’s private API. Can anyone point me to: The exact path of the dylib or bundle that actually exports the MIDINetworkSessionCreate/MIDINetworkSessionSetName/MIDINetworkSessionSetEnabled symbols? A minimal working snippet (C or Obj-C) that reliably creates and names a Network-MIDI session? Any pointers, sample code, or even ideas about where Apple hides this functionality on macOS 15 would be hugely appreciated. Thanks!
0
1
238
Jun ’25
PushToTalk
Using the PushToTalk library, call requestBeginTransmitting(channelUUID: UUID) on a Bluetooth device, then start recording inside the PTChannelManagerDelegate proxy method channelManager:(PTChannelManager *)channelManager didActivateAudioSession:(AVAudioSession *)audioSession. Recording completes there.
11
0
1.2k
Oct ’25
AVAssetWriterInput Crash on appendSampleBuffer Converting PCM
Overview We are producing audio in real time from an editing application and are trying to put that on an HLS stream. We attempt to submit PCM samples through an audio writer but are getting a crash after a select number of samples have been appended. Depending on the number of audio frames in the PCM buffer, we might get more iterations before the crash but it always has the same traceback (see below). Code The setup is rather simple. We took inspiration from a few sources around the web. NSMutableDictionary *audio = [[NSMutableDictionary alloc] init]; [audio setObject:@(kAudioFormatMPEG4AAC) forKey:AVFormatIDKey]; [audio setObject:[NSNumber numberWithInt:config.audioSampleRate] // 48000 forKey:AVSampleRateKey]; [audio setObject:[NSNumber numberWithInt:config.audioChannels] // 2 forKey:AVNumberOfChannelsKey]; [audio setObject:@160000 forKey:AVEncoderBitRateKey]; m_audioConfig = [[NSDictionary alloc] initWithDictionary:audio]; m_audio = [[AVAssetWriterInput alloc] initWithMediaType:AVMediaTypeAudio outputSettings:m_audioConfig]; AVAudioFrameCount audioFrames = BUFFER_SAMPLES * bCount; AVAudioPCMBuffer *pcmBuffer = [[AVAudioPCMBuffer alloc] initWithPCMFormat:m_full.pcmFormat frameCapacity:audioFrames]; pcmBuffer.frameLength = pcmBuffer.frameCapacity; AudioChannelLayout layout; memset(&layout, 0, sizeof(layout)); layout.mChannelLayoutTag = kAudioChannelLayoutTag_Stereo; CMFormatDescriptionRef format; OSStatus stats = CMAudioFormatDescriptionCreate( kCFAllocatorDefault, pcmBuffer.format.streamDescription, sizeof(layout), &layout, 0, nil, nil, &format ); for (int i = 0; i < bCount; i++) { AudioPCM pcm; audioCallback->callback(pcm); memcpy(*(pcmBuffer.int16ChannelData) + (bufferSize * i), pcm.data, bufferSize); } size_t samplesConsumed = BUFFER_SAMPLES * bCount; CMSampleBufferRef sampleBuffer; CMSampleTimingInfo timing; timing.duration = CMTimeMake(1, config.audioSampleRate); timing.presentationTimeStamp = presentationTime; timing.decodeTimeStamp = kCMTimeInvalid; OSStatus ostatus = CMSampleBufferCreate( kCFAllocatorDefault, nil, false, nil, nil, format, (CMItemCount)pcmBuffer.frameLength, 1, &timing, 0, nil, &sampleBuffer ); //// ostatus = CMSampleBufferSetDataBufferFromAudioBufferList( sampleBuffer, kCFAllocatorDefault, kCFAllocatorDefault, kCMSampleBufferFlag_AudioBufferList_Assure16ByteAlignment, pcmBuffer.audioBufferList ); if (ostatus != noErr) { NSLog(@"fill audio sample from buffer list failed: %s", logAudioError(ostatus)); return; } ostatus = CMSampleBufferSetDataReady(sampleBuffer); if (ostatus != noErr) { NSLog(@"set sample buffer ready failed: %s", logAudioError(ostatus)); return; } // Finally we can attach it, then shove the presentation time forward [m_audio appendSampleBuffer:sampleBuffer]; The Crash The crash points towards some level of deallocation when the conversion tooling is done or has enough samples to process an output packet? It's hard to say.
0 caulk 0x1a1e9532c caulk::alloc::tiered_allocator<caulk::alloc::size_range_tier<0ul, 1008ul, caulk::alloc::tree_allocator<caulk::alloc::chunk_allocator<caulk::alloc::page_allocator, caulk::alloc::bitmap_allocator, caulk::alloc::embed_block_memory, 16384ul, 16ul, 6ul>>>, caulk::alloc::size_range_tier<1009ul, 256000ul, caulk::alloc::guarded_edges_allocator<caulk::alloc::consolidating_free_map<caulk::alloc::page_allocator, 10485760ul>, 4ul>>, caulk::alloc::tracking_allocator<caulk::alloc::page_allocator>>::deallocate(caulk::alloc::block, unsigned long) + 636 1 AudioToolboxCore 0x1993fbfe4 ExtendedAudioBufferList_Destroy + 112 2 AudioToolboxCore 0x1993d5fe0 std::__1::__optional_destruct_base<ACCodecOutputBuffer, false>::~__optional_destruct_base[abi:ne180100]() + 68 3 AudioToolboxCore 0x1993d5f48 acv2::CodecConverter::~CodecConverter() + 196 4 AudioToolboxCore 0x1993d5e5c acv2::CodecConverter::~CodecConverter() + 16 5 AudioToolboxCore 0x1992574d8 std::__1::vector<std::__1::unique_ptr<acv2::AudioConverterBase, std::__1::default_delete<acv2::AudioConverterBase>>, std::__1::allocator<std::__1::unique_ptr<acv2::AudioConverterBase, std::__1::default_delete<acv2::AudioConverterBase>>>>::__clear[abi:ne180100]() + 84 6 AudioToolboxCore 0x199259acc acv2::AudioConverterChain::RebuildConverterChain(acv2::ChainBuildSettings const&) + 116 7 AudioToolboxCore 0x1992596ec acv2::AudioConverterChain::SetProperty(unsigned int, unsigned int, void const*) + 1808 8 AudioToolboxCore 0x199324acc acv2::AudioConverterV2::setProperty(unsigned int, unsigned int, void const*) + 84 9 AudioToolboxCore 0x199327f08 with_resolved(OpaqueAudioConverter*, caulk::function_ref<int (AudioConverterAPI*)>) + 60 10 AudioToolboxCore 0x1993281e4 AudioConverterSetProperty + 72 11 MediaToolbox 0x1a7566c2c FigSampleBufferProcessorCreateWithAudioCompression + 2296 12 MediaToolbox 0x1a754db08 0x1a70b5000 + 4819720 13 MediaToolbox 0x1a754dab4 FigMediaProcessorCreateForAudioCompressionWithFormatWriter + 100 14 MediaToolbox 0x1a77ebb98 0x1a70b5000 + 7564184 15 MediaToolbox 0x1a7804158 0x1a70b5000 + 7663960 16 MediaToolbox 0x1a7801da0 0x1a70b5000 + 7654816 17 AVFCore 0x1ada530c4 -[AVFigAssetWriterTrack addSampleBuffer:error:] + 192 18 AVFCore 0x1ada55164 -[AVFigAssetWriterAudioTrack _flushPendingSampleBuffersReturningError:] + 500 19 AVFCore 0x1ada55354 -[AVFigAssetWriterAudioTrack addSampleBuffer:error:] + 472 20 AVFCore 0x1ada4ebf0 -[AVAssetWriterInputWritingHelper appendSampleBuffer:error:] + 128 21 AVFCore 0x1ada4c354 -[AVAssetWriterInput appendSampleBuffer:] + 168 22 lib_devapple_hls.dylib 0x115d2c7cc detail::AppleHLSImplementation::audioRuntime() + 1052 23 lib_devapple_hls.dylib 0x115d2d094 void* std::__1::__thread_proxy[abi:ne180100]<std::__1::tuple<std::__1::unique_ptr<std::__1::__thread_struct, std::__1::default_delete<std::__1::__thread_struct>>, void (detail::AppleHLSImplementation::*)(), detail::AppleHLSImplementation*>>(void*) + 72 24 libsystem_pthread.dylib 0x196e5b2e4 _pthread_start + 136 Any insight would be welcome!
2
0
343
Jun ’25
Displaying and working with Favorites in iOS app
New to iOS development and I've been trying to make heads or tails of the documentation. I know there is a difference between the data fields returned for songs from the user library and from the catalog, but whenever I search on the Apple site I can't find a list of each. For example, I'm trying to get the releaseDate of a song in my library, but it seems I'll have to cross-query the catalog entry using song.catalogID or song.isrc, and when I try to use them I can't find a cross reference between the two. I'm totally turned around. Also trying to determine whether a song in my library has been favorited or not; isFavorited (or something similar) doesn't seem to be a thing. Using this code and trying to find a way to display a solid star if the song has been favorited or an empty one if it's not. Seems like a basic request but I can't find anything on how to do it. I've searched docs, googled, tried. Does Apple want us to query the user's Favorited Songs playlist or something? How do I know which playlist that is? I know isFavorited isn't a thing, just using it here so you can see what my intention is: HStack(spacing: 10) { Image(systemName: song.isFavorited ? "star.fill" : "star") .foregroundColor(song.isFavorited ? .yellow : .gray) Image(systemName: "magnifyingglass") }
1
0
232
Oct ’25
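A sketch of the library-to-catalog cross-query the post above reaches for, assuming iOS 16+ MusicKit and granted MusicAuthorization; the \.isrc filter and the limit property reflect my understanding of the current API and should be verified. It does not answer the favorites question, which the post already notes has no obvious property.

```swift
import MusicKit

// Sketch (iOS 16+ MusicKit): take a song from the library, then fetch its catalog
// counterpart by ISRC to read catalog-only fields such as releaseDate.
// The \.isrc filter is my assumption about the current SDK; verify before relying on it.
func catalogReleaseDateForFirstLibrarySong() async throws -> Date? {
    var libraryRequest = MusicLibraryRequest<Song>()
    libraryRequest.limit = 1
    guard let librarySong = try await libraryRequest.response().items.first,
          let isrc = librarySong.isrc else { return nil }

    let catalogRequest = MusicCatalogResourceRequest<Song>(matching: \.isrc, equalTo: isrc)
    return try await catalogRequest.response().items.first?.releaseDate
}
```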
CoreMIDI driver - flow control
Hi, when a CoreMIDI driver controls physical HW it is probably quite common to have to control the amount of MIDI data received from the system. What comes to mind is to just delay returning control of the MIDIDriverInterface::Send() callback to the calling process. While the application trying to send MIDI really does stall until the callback returns, that seems only to be a side effect of a generally stalled CoreMIDI server. Between the callbacks the application can send as much MIDI data as it wants to CoreMIDI; its buffering seems to be endless... However, the HW might not be able to play out all the data. It seems there is no way to indicate an overflow/full-buffer situation back to the application/CoreMIDI. How is this supposed to work? Thanks, any hints or pointers are highly appreciated! Hagen.
0
0
267
Oct ’25
Ability to programmatically download Premium and Enhanced voices
Please consider adding the ability to programmatically download Premium and Enhanced voices. At the moment it is extremely inconvenient for our users, as they have to navigate to Settings themselves to download voices. Our app relies heavily on SpeechSynthesis integration, and it would greatly benefit from this feature. FB16307193
0
1
86
Jun ’25
ASAF Panner Pro Tools Plug In
A recent WWDC session "Learn about Apple Immersive Video technologies" showed an Apple Spatial Audio Format Panner plugin for Pro Tools. The presenter stated that it's available on a per-user license. Where can users access this?
1
3
405
Jun ’25
Audio of AirPods won’t work
Since the last update to iOS 26.0 (23A5276f), the AirPods connect to my iPhone but the audio still plays through the phone. The Bluetooth icon shows that they’re paired.
1
0
94
Jun ’25