Explore the power of machine learning and Apple Intelligence within apps. Discuss integrating features, share best practices, and explore the possibilities for your app here.

All subtopics
Posts under Machine Learning & AI topic

Post

Replies

Boosts

Views

Activity

The asset pack with the ID “testVideoAssetPack” couldn’t be looked up: Could not connect to the server.
On macOS Tahoe 26.0, iOS 26.0 (23A5287g) on a physical device (not a simulator), and Xcode 26.0 beta 3 (17A5276g), I followed the tutorial “Testing your asset packs locally”. I use this command line to start the test server:

xcrun ba-serve --host 192.168.0.109 test.aar

The content displayed on the terminal is:

Loading asset packs…
Loading the asset pack at “test.aar”…
Listening on port 63125…
Choose an identity in the panel to continue.
Listening on port 63125…

When I run the project, Xcode reports an error: Download failed: Could not connect to the server. If I visit https://192.168.0.109:63125 in Safari on the iPhone, the page displays "Hello, world!" Both failures give too few error messages, and I have no idea what the specific cause is. I hope someone can offer some guidance. Best Regards.

This is my Manifest.json:

{
  "assetPackID": "testVideoAssetPack",
  "downloadPolicy": {
    "prefetch": {
      "installationEventTypes": ["firstInstallation", "subsequentUpdate"]
    }
  },
  "fileSelectors": [
    { "file": "video/test.mp4" }
  ],
  "platforms": ["iOS"]
}
1
0
404
Jul ’25
Crash inside of Vision predictWithCVPixelBuffer - Crashed: com.apple.VN.detectorSyncTasksQueue.VNCoreMLTransformer
Hello, We have been encountering a persistent crash in our application, which is deployed exclusively on iPad devices. The crash occurs in the following code block:

let requestHandler = ImageRequestHandler(paddedImage)
var request = CoreMLRequest(model: model)
request.cropAndScaleAction = .scaleToFit
let results = try await requestHandler.perform(request)

The client using this code is wrapped inside an actor, following Swift concurrency principles. The issue has been consistently reproduced across multiple iPadOS versions: 18.4.0, 18.4.1, and 18.5.0. This is the crash log:

Crashed: com.apple.VN.detectorSyncTasksQueue.VNCoreMLTransformer
0 libobjc.A.dylib 0x7b98 objc_retain + 16
1 libobjc.A.dylib 0x7b98 objc_retain_x0 + 16
2 libobjc.A.dylib 0xbf18 objc_getProperty + 100
3 Vision 0x326300 -[VNCoreMLModel predictWithCVPixelBuffer:options:error:] + 148
4 Vision 0x3273b0 -[VNCoreMLTransformer processRegionOfInterest:croppedPixelBuffer:options:qosClass:warningRecorder:error:progressHandler:] + 748
5 Vision 0x2ccdcc __119-[VNDetector internalProcessUsingQualityOfServiceClass:options:regionOfInterest:warningRecorder:error:progressHandler:]_block_invoke_5 + 132
6 Vision 0x14600 VNExecuteBlock + 80
7 Vision 0x14580 __76+[VNDetector runSuccessReportingBlockSynchronously:detector:qosClass:error:]_block_invoke + 56
8 libdispatch.dylib 0x6c98 _dispatch_block_sync_invoke + 240
9 libdispatch.dylib 0x1b584 _dispatch_client_callout + 16
10 libdispatch.dylib 0x11728 _dispatch_lane_barrier_sync_invoke_and_complete + 56
11 libdispatch.dylib 0x7fac _dispatch_sync_block_with_privdata + 452
12 Vision 0x14110 -[VNControlledCapacityTasksQueue dispatchSyncByPreservingQueueCapacity:] + 60
13 Vision 0x13ffc +[VNDetector runSuccessReportingBlockSynchronously:detector:qosClass:error:] + 324
14 Vision 0x2ccc80 __119-[VNDetector internalProcessUsingQualityOfServiceClass:options:regionOfInterest:warningRecorder:error:progressHandler:]_block_invoke_4 + 336
15 Vision 0x14600 VNExecuteBlock + 80
16 Vision 0x2cc98c __119-[VNDetector internalProcessUsingQualityOfServiceClass:options:regionOfInterest:warningRecorder:error:progressHandler:]_block_invoke_3 + 256
17 libdispatch.dylib 0x1b584 _dispatch_client_callout + 16
18 libdispatch.dylib 0x6ab0 _dispatch_block_invoke_direct + 284
19 Vision 0x2cc454 -[VNDetector internalProcessUsingQualityOfServiceClass:options:regionOfInterest:warningRecorder:error:progressHandler:] + 632
20 Vision 0x2cd14c __111-[VNDetector processUsingQualityOfServiceClass:options:regionOfInterest:warningRecorder:error:progressHandler:]_block_invoke + 124
21 Vision 0x14600 VNExecuteBlock + 80
22 Vision 0x2ccfbc -[VNDetector processUsingQualityOfServiceClass:options:regionOfInterest:warningRecorder:error:progressHandler:] + 340
23 Vision 0x125410 __swift_memcpy112_8 + 4852
24 libswift_Concurrency.dylib 0x5c134 swift::runJobInEstablishedExecutorContext(swift::Job*) + 292
25 libswift_Concurrency.dylib 0x5d5c8 swift_job_runImpl(swift::Job*, swift::SerialExecutorRef) + 156
26 libdispatch.dylib 0x13db0 _dispatch_root_queue_drain + 364
27 libdispatch.dylib 0x1454c _dispatch_worker_thread2 + 156
28 libsystem_pthread.dylib 0x9d0 _pthread_wqthread + 232
29 libsystem_pthread.dylib 0xaac start_wqthread + 8

We found an issue similar to ours - https://developer.apple.com/forums/thread/770771 - but the crash logs are quite different, so we believe this warrants further investigation to better understand the root cause and potential mitigation strategies.
Please let us know if any additional information would help diagnose this issue.
3
0
436
Jul ’25
Vision Framework - Testing RecognizeDocumentsRequest
How do I test the new RecognizeDocumentsRequest API? Reference: https://www.youtube.com/watch?v=H-GCNsXdKzM I am running the Xcode beta; however, I only have one primary device, and I cannot install beta software on it. Please suggest a testing strategy. Will the simulator work? The new capability is critical to my application: it is just what I need for structuring document scans and extraction. Thank you.
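For what it's worth, modern Vision requests generally do run in the simulator (typically on CPU, so more slowly), which may be enough for functional testing until a beta-capable device is available. Below is a hedged harness sketch; it assumes RecognizeDocumentsRequest follows the same async perform(on:) pattern as the other Swift-native Vision requests, and the exact observation shape may differ between seeds:

import Vision

@available(iOS 26.0, macOS 26.0, *)
func recognizeDocument(at url: URL) async throws {
    let request = RecognizeDocumentsRequest()
    // perform(on:) mirrors the pattern used by the modern Vision request types
    let observations = try await request.perform(on: url)
    for observation in observations {
        // Inspect the structured document (paragraphs, tables, lists) here
        print(observation)
    }
}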
1
0
323
Jun ’25
How to pass data to FoundationModels with a stable identifier
For example: I have a list of to-dos, each with a unique id (a GUID). I want to feed them to the LLM and have the model rewrite the items so they start with an action verb. I'd like to get them back and identify which rewritten item corresponds to which original item. I obviously can't compare the text, as it has changed. I've tried passing the original GUIDs in with each to-do, but the extra GUID characters pollute the input and confuse the model. I've tried numbering them in order and adding an originalSortOrder field to my generable type, but it doesn't work reliably. Any suggestions? I could do them one at a time, but I also have a use case where I'm asking for them to be organized in sections, and while I've instructed the model not to rename anything, it still happens. It's just all very nondeterministic.
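One hedged workaround sketch, assuming the @Generable and @Guide macros from FoundationModels: give each item a small local integer index instead of the GUID (an index adds far less noise than a 36-character GUID), guide the model to echo the index back verbatim, and map indexes back to GUIDs on your side after generation. The type and property names below are illustrative, not from the original post:

import FoundationModels

@Generable
struct RewrittenToDo {
    @Guide(description: "The index of the original to-do item, copied back unchanged")
    let originalIndex: Int
    @Guide(description: "The to-do rewritten to start with an action verb")
    let text: String
}

// After generation, recover the original GUIDs locally (hypothetical mapping):
// let idsByIndex = Dictionary(uniqueKeysWithValues: todos.enumerated().map { ($0.offset, $0.element.id) })
// let originalID = idsByIndex[rewritten.originalIndex]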
2
0
323
Jun ’25
LanguageModelStream and collecting the final output
I have a Generable type with many elements. I am using stream() to process the output (Generable.PartiallyGenerated?) incrementally. At the end, I want to pass the final version (not the partially generated one) to another function. I cannot seem to find a good way to convert a MyGenerable.PartiallyGenerated to a MyGenerable. Am I missing some functionality in the APIs?
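A hedged sketch of one approach: recent FoundationModels seeds expose a collect() method on the response stream that waits for generation to finish and returns the fully generated value, so you never convert a PartiallyGenerated yourself. Check your SDK's generated interface, since this may not exist in every beta; MyGenerable here stands in for your own type:

import FoundationModels

@Generable
struct MyGenerable {
    let title: String
}

func streamThenCollect(session: LanguageModelSession, prompt: String) async throws -> MyGenerable {
    let stream = session.streamResponse(to: prompt, generating: MyGenerable.self)
    for try await partial in stream {
        // Update the UI from each partially generated snapshot
        _ = partial
    }
    // collect() (where available) returns the complete, non-partial value
    let response = try await stream.collect()
    return response.content
}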
4
0
603
Jul ’25
macOS 26 Beta 2 - Foundation Models - Symbol not found
It seems like there was an undocumented change that made the Transcript.init(entries: [Transcript.Entry]) initializer private, which broke my application, which relies on (manual) reconstruction of Transcript entries. It worked fine on beta 1; on beta 2 there's this error:

dyld[72381]: Symbol not found: _$s16FoundationModels10TranscriptV7entriesACSayAC5EntryOG_tcfC
Referenced from: <44342398-591C-3850-9889-87C9458E1440> /Users/mika/experiments/apple-on-device-ai/fm
Expected in: <66A793F6-CB22-3D1D-A560-D1BD5B109B0D> /System/Library/Frameworks/FoundationModels.framework/Versions/A/FoundationModels

Is this part of an API transition? If so - Apple, please update your documentation.
3
0
390
Jun ’25
Memory Attribution for Foundation Models in iOS 26
Hi, I’m developing an app targeting iOS 26, using the new FoundationModels framework to perform on-device LLM inference, and I’m currently testing memory usage. Does the memory used by FoundationModels (including model weights, KV cache, and any inference-related buffers) count toward my app’s Jetsam memory limit, or is any of it managed separately by the system? I may need to run two concurrent inferences, each with a 4096-token context window. Is this explicitly supported or allowed by FoundationModels on iOS 26? Would it significantly increase the risk of memory-based termination? Thanks in advance for any clarification.
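Not an answer on where the accounting formally lands, but a hedged way to observe it empirically: os_proc_available_memory() (os/proc.h, iOS 13+) reports how much memory your process can still allocate before hitting its limit, so sampling it around an inference call shows whether the model's working set is billed against your app. A minimal sketch:

import os
import FoundationModels

@available(iOS 26.0, *)
func respondAndMeasure(session: LanguageModelSession, prompt: String) async throws -> String {
    let before = os_proc_available_memory()
    let response = try await session.respond(to: prompt)
    let after = os_proc_available_memory()
    // A large delta suggests the inference working set counts toward this process's limit
    print("Available memory: \(before) -> \(after) bytes (delta \(Int64(before) - Int64(after)))")
    return response.content
}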
1
0
426
Jul ’25
Compatibility issue of TensorFlow-metal with PyArrow
Overview: I'm experiencing a critical issue where TensorFlow-metal and PyArrow appear to be incompatible when installed together in the same environment. Whenever both packages are present, TensorFlow crashes and the kernel dies during execution.

Environment details:
macOS version: 15.3.2
Mac model: MacBook Pro (M3 Max)
Python version: 3.11
TensorFlow version: 2.19
PyArrow version: 19.0.0

Issue description: When both TensorFlow-metal and PyArrow are installed in the same Python environment, any attempt to train with TensorFlow results in an immediate kernel crash. The issue appears to be a compatibility problem between these two packages rather than a problem with either package individually.

Steps to reproduce:
1. Create a new Python environment: conda create -n tf-metal python=3.11
2. Install TensorFlow-metal: pip install tensorflow tensorflow-metal
3. Install PyArrow: pip install pyarrow
4. Run the following minimal example:

import numpy as np
import tensorflow as tf

# Create a simple model
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(2,)),
    tf.keras.layers.Dense(1)
])
model.compile(optimizer='adam', loss='mse')
model.summary()  # This works fine

# Generate some dummy data
X = np.random.random((100, 2))
y = np.random.random((100, 1))

# The crash happens exactly at this line
model.fit(X, y, epochs=5, batch_size=32)  # CRASH: kernel dies here

Result: the kernel crashes with no error message.

What I've tried:
- Reinstalling both packages in different orders
- Using different versions of both packages
- Creating isolated environments
- Checking system logs for additional error information

The only workaround I've found is to use separate environments for each package, which isn't practical for my workflow: I need both libraries in my data processing and machine learning pipeline.

Questions: Has anyone else encountered this specific compatibility issue? Are there known workarounds that allow both packages to coexist? Is this a known issue that's being addressed in upcoming releases? Any insights, suggestions, or assistance would be greatly appreciated. I'm happy to provide any additional information that might help diagnose this problem. Thank you in advance for your help!
2
0
145
May ’25
Swipe-to-Type Broken in iOS 26 Beta 1 & 2 Siri Typing Mode
I’ve been testing silent Siri engagement via typing on iOS 18 and also on iOS 26 beta 1 and beta 2. While normal typing works perfectly in type-to-Siri mode, I’ve noticed that swipe-to-type gestures don’t work within Siri’s input field. Interestingly, you still feel the usual haptic feedback associated with swipe typing, but no text appears in the Siri text box. Swipe-to-type continues to work flawlessly in other apps like Messages and Notes, so this seems to be an issue specific to Siri’s typing input handler in these betas. Hopefully, it will be fixed in the next release because swipe typing is essential to my silent Siri workflow.
1
0
244
Jun ’25
Foundation model sandbox restriction error
I'm seeing this error a lot in the console log of my iPhone 15 Pro (Apple Intelligence enabled):

com.apple.modelcatalog.catalog sync: connection error during call: Error Domain=NSCocoaErrorDomain Code=4099 "The connection to service named com.apple.modelcatalog.catalog was invalidated: failed at lookup with error 159 - Sandbox restriction." UserInfo={NSDebugDescription=The connection to service named com.apple.modelcatalog.catalog was invalidated: failed at lookup with error 159 - Sandbox restriction.} reached max num connection attempts: 1

Are there entitlements or permissions I need to enable in Xcode that I forgot to do? Here's how I'm initializing the language model session:

private func setupLanguageModelSession() {
    if #available(iOS 26.0, *) {
        let instructions = """
        my instructions
        """
        do {
            languageModelSession = try LanguageModelSession(instructions: instructions)
            print("Foundation Models language model session initialized")
        } catch {
            print("Error creating language model session: \(error)")
            languageModelSession = nil
        }
    } else {
        print("Device does not support Foundation Models (requires iOS 26.0+)")
        languageModelSession = nil
    }
}
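One thing worth checking before creating the session (a hedged sketch, using the availability API FoundationModels documents): confirm the model is actually available on the device, since Apple Intelligence being disabled, the model still downloading, or an ineligible device all surface as availability states rather than missing entitlements:

import FoundationModels

@available(iOS 26.0, *)
func checkModelAvailability() {
    switch SystemLanguageModel.default.availability {
    case .available:
        print("Model is ready for use")
    case .unavailable(let reason):
        // e.g. appleIntelligenceNotEnabled, modelNotReady, deviceNotEligible
        print("Model unavailable: \(reason)")
    }
}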
2
0
252
Jun ’25
Localizing prompts that have string-interpolated Generable objects
I'm working on localizing my prompts to support multiple languages, and in some cases my prompts have string-interpolated Generable objects. For example: "Given the following workout routine: \(routine), suggest one additional exercise to complement it." In the Strings dictionary, I'm only able to select String, Int, or Double parameters using %@ and %lld. Has anyone found a way to accomplish this?
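A hedged workaround sketch: format the Generable into a plain String first, then interpolate that through a localized format string, so the catalog only ever sees a %@ parameter. This assumes your Generable (routine here, which is hypothetical) has a useful textual description; otherwise provide one yourself:

import Foundation

// routine is assumed to be a @Generable value from your app (hypothetical)
let routineText = String(describing: routine)
let localizedPrompt = String(
    format: String(localized: "Given the following workout routine: %@, suggest one additional exercise to complement it."),
    routineText
)
// Pass localizedPrompt to the session as the prompt text.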
1
0
391
Jul ’25
Failing to run SystemLanguageModel inference with custom adapter
Hi, I have trained a basic adapter using the adapter training toolkit. I am trying a very basic example of loading it and running inference with it, but am getting the following error:

Passing along InferenceError::inferenceFailed::loadFailed::Error Domain=com.apple.TokenGenerationInference.E5Runner Code=0 "Failed to load model: ANE adapted model load failure: createProgramInstanceWithWeights:modelToken:qos:baseModelIdentifier:owningPid:numWeightFiles:error:: Program load new instance failure (0x170006)." UserInfo={NSLocalizedDescription=Failed to load model: ANE adapted model load failure: createProgramInstanceWithWeights:modelToken:qos:baseModelIdentifier:owningPid:numWeightFiles:error:: Program load new instance failure (0x170006).} in response to ExecuteRequest

Any ideas or direction? For testing, I am including the .fmadapter file inside the app bundle. This is where I load it:

@State private var session: LanguageModelSession? // = LanguageModelSession()

func loadAdapter() async throws {
    if let assetURL = Bundle.main.url(forResource: "qasc---afm---4-epochs-adapter", withExtension: "fmadapter") {
        print("Asset URL: \(assetURL)")
        let adapter = try SystemLanguageModel.Adapter(fileURL: assetURL)
        let adaptedModel = SystemLanguageModel(adapter: adapter)
        session = LanguageModelSession(model: adaptedModel)
        print("Loaded adapter and updated session")
    } else {
        print("Asset not found in the main bundle.")
    }
}

This seems to work fine, as I get to the log "Loaded adapter and updated session". However, when the inference code below runs, I get the aforementioned error:

func sendMessage(_ msg: String) {
    self.loading = true
    if let session = session {
        Task {
            do {
                let modelResponse = try await session.respond(to: msg)
                DispatchQueue.main.async {
                    self.response = modelResponse.content
                    self.loading = false
                }
            } catch {
                print("Error: \(error)")
                DispatchQueue.main.async {
                    self.loading = false
                }
            }
        }
    }
}
3
0
243
Jun ’25
InferenceError referencing context length in FoundationModels framework
I'm experimenting with downloading an audio file of spoken content, using the Speech framework to transcribe it, and then using FoundationModels to clean up the formatting by adding paragraph breaks and such. I have this code to do that cleanup:

private func cleanupText(_ text: String) async throws -> String? {
    print("Cleaning up text of length \(text.count)...")
    let session = LanguageModelSession(instructions: "The content you read is a transcription of a speech. Separate it into paragraphs by adding newlines. Do not modify the content - only add newlines.")
    let response = try await session.respond(to: .init(text), generating: String.self)
    return response.content
}

The content length is about 29,000 characters, and I get this error:

InferenceError::inferenceFailed::Failed to run inference: Context length of 4096 was exceeded during singleExtend.

Is 4096 a reference to a maximum input length? Or is this a bug? This is running on an M1 iPad Air with iPadOS 26 Seed 1.
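The error reflects the on-device model's fixed context window (4,096 tokens in the current betas, which appears to cover instructions, prompt, and output together), so a 29,000-character transcript cannot fit in one request. A hedged sketch of one workaround, splitting the transcript into chunks that stay well under the limit and cleaning each chunk separately:

import FoundationModels

func cleanupLongText(_ text: String) async throws -> String {
    // Assumption: roughly 4 characters per token; 4,000-character chunks leave
    // room in the 4,096-token window for instructions and the echoed output.
    // Tune the size for your content, and prefer splitting on sentence
    // boundaries in real code rather than mid-word as done here.
    let chunkSize = 4000
    var cleanedChunks: [String] = []
    var start = text.startIndex
    while start < text.endIndex {
        let end = text.index(start, offsetBy: chunkSize, limitedBy: text.endIndex) ?? text.endIndex
        let chunk = String(text[start..<end])
        let session = LanguageModelSession(instructions: "Separate this speech transcription into paragraphs by adding newlines. Do not modify the content - only add newlines.")
        let response = try await session.respond(to: chunk)
        cleanedChunks.append(response.content)
        start = end
    }
    return cleanedChunks.joined(separator: "\n")
}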
5
0
476
Jul ’25
Is it possible to create a virtual NPU device on macOS using Hypervisor.framework + CoreML?
Is it possible to expose a custom VirtIO device to a Linux guest running inside a VM, likely using QEMU backed by Hypervisor.framework? The guest would see this device as something like /dev/npu0 and would use a kernel driver plus a userspace library to submit inference requests. On the macOS host, these requests would be executed using CoreML, MPSGraph, or BNNS, and the results would be passed back to the guest via IPC. Does macOS allow this kind of "fake" NPU/GPU?
1
0
448
Aug ’25
Do Apple's new foundation models include a Vision API for accessing on-device LLM capabilities?
I couldn't find information about this in the documentation. Could someone clarify if this API is available and how to access it?
2
0
233
Jun ’25
Album segmentation model
I have a question. In China, long-pressing a picture in the Photos album segments the subject. Is this an on-device model? Is there any documentation about it? Can developers use it?
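The long-press "subject lift" in Photos runs on-device, and similar functionality is available to developers. A hedged sketch using Vision's foreground-instance mask request (iOS 17+), which produces a segmentation mask for the prominent subject of an image:

import Vision
import CoreVideo

func subjectMask(for cgImage: CGImage) throws -> CVPixelBuffer? {
    let request = VNGenerateForegroundInstanceMaskRequest()
    let handler = VNImageRequestHandler(cgImage: cgImage)
    try handler.perform([request])
    guard let result = request.results?.first else { return nil }
    // Returns a mask scaled to the input image; generateMaskedImage(ofInstances:from:croppedToInstancesExtent:)
    // can be used instead to lift the subject directly.
    return try result.generateScaledMaskForImage(forInstances: result.allInstances, from: handler)
}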
2
0
318
Jul ’25
Cannot find 'SystemLanguageModel' in scope
Hi everyone, I am using Xcode 16.4 on macOS Sequoia 15.5 with Apple Intelligence turned on. The following code gives the error message in the title:

import NaturalLanguage

@available(iOS 18.0, *)
func testSystemModel() {
    let model = SystemLanguageModel.default
    print(model)
}

What am I missing?
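For what it's worth, SystemLanguageModel lives in the FoundationModels framework (new in the iOS 26 / macOS 26 SDKs), not NaturalLanguage, so it cannot resolve in Xcode 16.4 at all. A hedged sketch of what the code would look like once built with Xcode 26:

import FoundationModels

@available(iOS 26.0, macOS 26.0, *)
func testSystemModel() {
    // SystemLanguageModel is part of FoundationModels, not NaturalLanguage
    let model = SystemLanguageModel.default
    print(model.availability)
}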
1
0
287
Jun ’25
AI ecosystem
I’m sure someone has thought about this already, but let’s have an ecosystem where Apple Intelligence uses your most capable (Apple) hardware first and the cloud service second.
1
0
224
May ’25
What Should the iOS Deployment Target Be Set to?
Originally, I set my iOS deployment target to 18.1, but now that I'm integrating Foundation Models, I have set it to iOS 26.0. Is this OK?
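Raising the target to 26.0 works, but it drops every user below iOS 26. A common alternative, sketched below under the assumption that Foundation Models is an optional feature of the app: keep the 18.1 deployment target and gate the Foundation Models code paths with availability checks, so older devices simply skip the feature.

import FoundationModels

func configureAIFeatures() {
    if #available(iOS 26.0, *) {
        // Foundation Models path: only reached on iOS 26 and later
        let session = LanguageModelSession()
        _ = session
    } else {
        // iOS 18.1 through 25.x: hide or gracefully degrade the feature
    }
}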
1
0
165
Feb ’26
Provide unique identifier for tool calls and responses
Hey, It would be great to have an equivalent of toolCallId for both toolCall and toolResult in the transcript. Otherwise, it is hard to connect tool calls with their respective responses when there are multiple parallel calls to the same tool. Thanks!
3
0
422
Jul ’25