Discuss Spatial Computing on Apple Platforms.

Posts under General subtopic

Post

Replies

Boosts

Views

Activity

With manipulation component, once you let go, how to prevent the entity from disappearing while animating it back into the volume
So with the new ManipulationComponent, we can choose "stay" and then if you drag it out of your volume, once you let go it will instantly disappear. We can "animate" it back to inside the volume, e.g.: content.subscribe(to: ManipulationEvents.WillRelease.self) { event in Entity.animate( .easeInOut(duration: 1), body: { event.entity.position = [0, 0.2, 0] }, completion: {} ) }. However, for the duration that it travels outside of the volume, it's invisible the whole time. In this Apple video, it seems to be visible when dragging and when letting go, but perhaps that's not a volume they're dragging it out of? https://youtu.be/VtenPKrvPOU?si=y1zoZOs2IMyDzOm6&t=1748 Does anyone know how to keep the entity visible even after letting the entity go, while you animate it back towards the inside of your volume?
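For readability, here is the approach from the post above written out as a minimal sketch. It restates the poster's snippet and animates the entity back on release; it does not by itself solve the visibility problem while the entity is outside the volume's bounds.

content.subscribe(to: ManipulationEvents.WillRelease.self) { event in
    Entity.animate(
        .easeInOut(duration: 1),
        body: {
            // Hypothetical "home" position inside the volume.
            event.entity.position = [0, 0.2, 0]
        },
        completion: {}
    )
}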
1
1
990
Jan ’26
ImageAnchoringSource from URL
Hello, I was wondering how I can initialize an ImageAnchoringSource using https://developer.apple.com/documentation/realitykit/anchoringcomponent/imageanchoringsource/init(_:) When I construct one using a URL, it doesn't seem to be tracked, and I see the following when I debug-print the component: ▿ 0 : AnchoringComponent ▿ target : Target ▿ referenceImage : 1 element ▿ from : ImageAnchoringSource ▿ url : Optional<URL> ▿ some : file:///var/mobile/Containers/Data/Application/D1126EA0-A1D7-468F-A40C-8578B7F5BDDF/Library/Caches/CodeCache/0E457AA7-2195-48B9-9DD4-58CEB9397F69.png - _url : file:///var/mobile/Containers/Data/Application/D1126EA0-A1D7-468F-A40C-8578B7F5BDDF/Library/Caches/CodeCache/0E457AA7-2195-48B9-9DD4-58CEB9397F69.png - _parseInfo : nil - _baseParseInfo : nil - name : nil - group : nil ▿ trackingMode : TrackingMode - trackingMode : 2 Is there a specific format for the parseInfo? When I use the same image to make an image anchoring source by group and name in AR Resources, it is tracked. Thank you!
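For reference, a minimal sketch of the two construction paths compared above. The exact initializer labels are assumptions based on the linked documentation and on the debug dump in the post, not a confirmed answer to the tracking question.

import Foundation
import RealityKit

func makeImageAnchoredEntity(imageFileURL: URL) -> Entity {
    let entity = Entity()

    // Path 1: source built from a file URL via init(_:), the case reported above as not tracked.
    let fileSource = AnchoringComponent.ImageAnchoringSource(imageFileURL)
    entity.components.set(AnchoringComponent(.referenceImage(from: fileSource)))

    // Path 2 (alternative): source built from an AR Resource Group, reported above as tracked.
    // let groupSource = AnchoringComponent.ImageAnchoringSource(group: "AR Resources", name: "MyImage")
    // entity.components.set(AnchoringComponent(.referenceImage(from: groupSource)))

    return entity
}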
2
1
1.2k
Mar ’26
Can't find a DLL in a visionOS app with Unity
Dear all, I'm using Unity 6.2 beta and Xcode 16.2. I'm creating a simple framework to use the text-to-speech functionality in visionOS from Unity. The framework is created in Swift. I create an Objective-C wrapper with the following declarations: ... void _initTTS(int); ... I create the framework, import it in Unity and call the functions in a C# wrapper class. The code is as follows: public static class TTSPluginManager { [DllImport("TTS_Vision")] private static extern void _initTTS(int val); ... public static void Initialize() { #if UNITY_VISIONOS _initTTS(0); #else Debug.LogWarning("NativeTTS.Initialize called on a non-iOS platform. Ignoring."); #endif } } I have managed to compile and run the program on the Apple Vision Pro, but I keep on getting the following error: DllNotFoundException: TTS_Vision assembly: type: member:(null) TTSPluginManager.Initialize () (at Assets/Plugins/TTSPluginManager.cs:33) LecturePortalManager.OnCreateStory (Ink.Runtime.Story story) (at Assets/AVRLecture/LecturePortalManager.cs:17) InkLoader.StartStory () (at Assets/AVRLecture/InkLoader.cs:24) InkLoader.Start () (at Assets/AVRLecture/InkLoader.cs:18) If I run the generated code from Xcode, I can see the app in the AVP, but I keep getting a loading error: DllNotFoundException: Unable to load DLL 'TTS_Vision'. Tried the load the following dynamic libraries: Unable to load dynamic library '/TTS_Vision' because of 'Failed to open the requested dynamic library (0x06000000) dlerror() = dlopen(/TTS_Vision, 0x0005): tried: '/TTS_Vision' (no such file) at TTSPluginManager.Initialize () [0x00000] in <00000000000000000000000000000000>:0 at LecturePortalManager.OnCreateStory (Ink.Runtime.Story story) [0x00000] in <00000000000000000000000000000000>:0 I can see in the generated code that the framework (TTS_Vision) is there, but the path seems wrong. I've tried adding more options to the searched paths, with no success... Any hints or suggestions are much appreciated.
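Not a definitive fix, but two things commonly cause this with Unity on iOS-family platforms: native plugins are statically linked into the player, so the C# side usually declares [DllImport("__Internal")] rather than the framework name, and the Swift side needs to export a C-callable symbol. A hedged sketch of the Swift export, assuming AVSpeechSynthesizer as the TTS backend; all names are illustrative.

import AVFoundation

// Hypothetical Swift side of the plugin. @_cdecl exports a C-callable symbol that a
// statically linked Unity plugin (DllImport "__Internal") can resolve at link time.
private let synthesizer = AVSpeechSynthesizer()

@_cdecl("_initTTS")
public func _initTTS(_ voiceIndex: Int32) {
    // Placeholder initialization; a real plugin would select a voice based on voiceIndex.
    print("TTS initialized with voice index \(voiceIndex)")
}

@_cdecl("_speak")
public func _speak(_ text: UnsafePointer<CChar>) {
    synthesizer.speak(AVSpeechUtterance(string: String(cString: text)))
}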
1
0
305
Sep ’25
SharePlay on visionOS with remote participants
Hi everyone, I’m building a visualization app for Vision Pro that uses SharePlay and GroupActivities to explore datasets collaboratively. I’ve successfully implemented the new SharedWorldAnchor feature, and everything works well with nearby, local participants. However, I’m stuck on one point: How can I share a world anchor with remote participants who join via FaceTime as spatial Personas? Apple’s demo app (where multiple users move a plane model around) seems to suggest that this is possible. For context, I’m building an immersive app with Metal rendering. Any guidance or examples would be greatly appreciated! Thanks, Jens
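Not an answer to the shared-world-anchor question itself, but a hedged fallback while waiting for guidance: remote spatial Personas don't share your physical space, so one common pattern is to agree on a shared scene origin and exchange entity poses over GroupSessionMessenger. A rough sketch; the message shape and field names are assumptions.

import GroupActivities
import RealityKit
import simd

// Hypothetical message carrying an entity pose relative to an agreed shared origin.
struct PoseMessage: Codable {
    var position: SIMD3<Float>
    var rotation: SIMD4<Float>   // quaternion components (ix, iy, iz, r)
}

func startPoseSync<A: GroupActivity>(session: GroupSession<A>, entity: Entity) async {
    let messenger = GroupSessionMessenger(session: session)

    // Apply poses received from other participants.
    Task {
        for await (message, _) in messenger.messages(of: PoseMessage.self) {
            entity.position = message.position
            entity.orientation = simd_quatf(vector: message.rotation)
        }
    }

    // Send our local pose (here just once, for illustration).
    let pose = PoseMessage(position: entity.position,
                           rotation: entity.orientation.vector)
    try? await messenger.send(pose)
}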
2
1
480
Sep ’25
Please provide access to face tracking blendshapes on visionOS
Apple, please provide access to face tracking blend shapes on visionOS, just like you do on iOS. You have the best eye and face tracking implementation on the market, please let us use it. There is a sizable audience who will buy the headset just for it. I personally know multiple people who are not buying the headset simply because you locked those features out. No raw camera access is needed, just abstracted blendshape values. You will make the headset so much more useful if you do this simple thing.
1
1
148
Jun ’25
Does Apple Spatial Audio Format documentation exist?
The WWDC25 video and notes titled “Learn About Apple Immersive Video Technologies” introduced the Apple Spatial Audio Format (ASAF) and codec (APAC). However, despite references throughout to using immersive video, there is scant information on ASAF/APAC (including no code examples and no framework references), and I’ve found no documentation in Apple’s APIs/Frameworks about its implementation and use months on. I want to leverage ambisonic audio in my app. I don’t want to write a custom AU if APAC will be opened up to developers. If you read the notes below along with the iPhone 17 advertising (“Video is captured with Spatial Audio for immersive listening”), it sounds like this is very much a live feature in iOS 26. Anyone know the state of play? I’m familiar with how the PHASE engine works, which is unrelated to what I’m asking about here. Original quote from the video referenced above: “ASAF enables truly externalized audio experiences by ensuring acoustic cues are used to render the audio. It’s composed of new metadata coupled with linear PCM, and a powerful new spatial renderer that’s built into Apple platforms. It produces high resolution Spatial Audio through numerous point sources and high resolution sound scenes, or higher order ambisonics.” ”ASAF is carried inside of broadcast Wave files with linear PCM signals and metadata. You typically use ASAF in production, and to stream ASAF audio, you will need to encode that audio as an mp4 APAC file.” ”APAC efficiently distributes ASAF, and APAC is required for any Apple immersive video experience. APAC playback is available on all Apple platforms except watchOS, and supports Channels, Objects, Higher Order Ambisonics, Dialogue, Binaural audio, interactive elements, as well as provisioning for extendable metadata.”
4
0
729
Sep ’25
Loading USDZ asset into Model3D causes visionOS 2.0 beta 5 to crash
We've recently discovered that our app crashes on startup on the latest visionOS 2.0 beta 5 (22N5297g) build. In fact, the entire field of view would dim down and visionOS would then restart, showing the Apple logo. Interestingly, no app crash is reported by Xcode during debug. After investigation, we have isolated the issue to a specific USDZ asset in our app. Loading it in a sample, blank project also causes visionOS to reliably crash, or become extremely unresponsive with rendering artifacts everywhere. This looks like a potentially serious issue. Even if the asset is problematic, loading it should not crash the entire OS. We have filed feedback FB14756285, along with a demo project. Hopefully someone can take a look. Thanks!
3
1
616
Jul ’25
Displaying multiple immersive movies in spheres in an immersive environment
In visionOS, I'm trying to create an immersive environment which would feature several spheres in which immersive movies are visible. I'm starting from sample code which creates a sphere, sets an immersive movie as its material, and opens it as an immersive environment. This works fine. But if I create a sphere in an open immersive environment using Reality Composer Pro and set its material to an immersive movie, I can see the movie on the sphere while I move outside of it, but if I try to get inside the sphere, it disappears. What would be the right way of doing this?
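One detail that usually matters here: a generated sphere renders only its outside faces, so to watch a movie from inside it the geometry is typically inverted (for example by negating one scale axis) or authored with inward-facing normals in Reality Composer Pro. A minimal sketch under those assumptions, using VideoMaterial; the URL and radius are placeholders.

import AVFoundation
import RealityKit

func makeMovieSphere(videoURL: URL, radius: Float = 5) -> ModelEntity {
    let player = AVPlayer(url: videoURL)

    let sphere = ModelEntity(
        mesh: .generateSphere(radius: radius),
        materials: [VideoMaterial(avPlayer: player)]
    )

    // Flip the geometry so the material is visible from inside the sphere
    // (assumption: the usual workaround; inward-facing normals authored in a
    // DCC tool or Reality Composer Pro achieve the same thing).
    sphere.scale = SIMD3<Float>(x: -1, y: 1, z: 1)

    player.play()
    return sphere
}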
1
1
725
Oct ’25
How to update TextureResource with MTLTexture?
Hi, I have a monitoring app that takes input video from UVC, processes it using Metal, and eventually produces an MTLTexture. The problem I'm facing is that I have to convert the MTLTexture to a CGImage and then call TextureResource.replace, which is super slow. Metal processing keeps up with the input frame rate (50 fps), but MTLTexture -> CGImage -> TextureResource only gets about 7 fps... Is there any way I can make it faster?
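One approach that avoids the CGImage round trip is TextureResource.DrawableQueue, which lets you blit (or render) your Metal output directly into the texture a RealityKit material samples. A rough sketch; the pixel format, usage flags, and class shape are assumptions to adapt.

import Metal
import RealityKit

// Streams frames from an MTLTexture into an existing TextureResource via a DrawableQueue,
// skipping the MTLTexture -> CGImage -> TextureResource conversion.
final class TextureStreamer {
    private let drawableQueue: TextureResource.DrawableQueue
    private let commandQueue: MTLCommandQueue

    init(target: TextureResource, device: MTLDevice, width: Int, height: Int) throws {
        let descriptor = TextureResource.DrawableQueue.Descriptor(
            pixelFormat: .bgra8Unorm,            // assumption: match your Metal pipeline's format
            width: width,
            height: height,
            usage: [.shaderRead, .renderTarget],
            mipmapsMode: .none
        )
        drawableQueue = try TextureResource.DrawableQueue(descriptor)
        target.replace(withDrawables: drawableQueue)   // the material keeps referencing `target`
        commandQueue = device.makeCommandQueue()!
    }

    // Call once per processed frame with the texture produced by your Metal pass.
    func push(frame: MTLTexture) throws {
        let drawable = try drawableQueue.nextDrawable()
        let commandBuffer = commandQueue.makeCommandBuffer()!
        let blit = commandBuffer.makeBlitCommandEncoder()!
        blit.copy(from: frame, to: drawable.texture)
        blit.endEncoding()
        commandBuffer.commit()
        drawable.present()
    }
}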
2
0
533
Oct ’25
onWorldRecenter memory leak and duplicate callbacks in ImmersiveSpace
Posting this here in case this information is helpful to other developers: As of visionOS 26.3 beta 1, onWorldRecenter has two significant issues: (FB21557639) Memory Leak: When onWorldRecenter is assigned to a RealityView within an ImmersiveSpace, it appears to retain a strong reference to the view's internal SwiftUI context. When the immersive space is dismissed, the view's @State objects will not be deallocated. Also, each time the immersive space view's body is executed, additional state storage will be allocated and leaked. Multiple Callbacks: When the user long-presses the Digital Crown, the onWorldRecenter closure will be called multiple times, once for each past view body execution, including those of immersive space views that have been previously dismissed. Although these issues seem to be most prevalent when onWorldRecenter is used with an ImmersiveSpace, they may also occur in the context of a WindowGroup under certain circumstances. It's possible to work around this problem by moving onWorldRecenter to an empty overlay view within the app's primary WindowGroup and forwarding the world recenter events to ImmersiveSpace views through a notification system, coupled with a debouncer as an extra precaution.
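A rough sketch of the workaround described above; the notification name, debounce interval, and the exact onWorldRecenter modifier signature are assumptions.

import Combine
import RealityKit
import SwiftUI

extension Notification.Name {
    static let worldDidRecenter = Notification.Name("worldDidRecenter")   // hypothetical name
}

// Primary WindowGroup content: an empty overlay owns the onWorldRecenter callback,
// so the ImmersiveSpace's RealityView never registers it directly.
struct MainWindowView: View {
    var body: some View {
        Text("Main window")   // placeholder content
            .overlay {
                Color.clear.onWorldRecenter {
                    NotificationCenter.default.post(name: .worldDidRecenter, object: nil)
                }
            }
    }
}

// Immersive space content: listen for the forwarded event, debouncing as an extra
// precaution against the duplicate callbacks described above.
struct ImmersiveView: View {
    @State private var lastRecenter = Date.distantPast

    var body: some View {
        RealityView { _ in
            // ... build the immersive scene ...
        }
        .onReceive(NotificationCenter.default.publisher(for: .worldDidRecenter)) { _ in
            guard Date().timeIntervalSince(lastRecenter) > 0.5 else { return }
            lastRecenter = Date()
            // ... re-anchor or rebuild content here ...
        }
    }
}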
1
1
1k
Jan ’26
ImagePresentationComponent .spatialStereoImmersive mode not rendering in WindowGroup context
Platform: visionOS 26. Frameworks: RealityKit, SwiftUI. Component: ImagePresentationComponent. I’m working with the new ImagePresentationComponent from visionOS 26 and hitting a rendering limitation when switching to .spatialStereoImmersive viewing mode within a WindowGroup context. This is what I’m seeing: Pure immersive space: ImagePresentationComponent with .spatialStereoImmersive mode works perfectly in a standalone ImmersiveSpace. Mode switching API: All mode transitions work correctly (logs confirm the component updates). Spatial content: .spatialStereo mode renders correctly in both window and immersive contexts. This is where it’s breaking for me: Window context: When the same RealityView + ImagePresentationComponent is placed inside a WindowGroup (even when that window is floating in a mixed immersive space), switching to .spatialStereoImmersive mode shows no visual change. The API calls succeed, state updates correctly, but the immersive content doesn’t render. Apple’s Spatial Gallery demonstrates exactly what I’m trying to achieve: Spatial photos displayed in a window with what feels like a horizontal scroll view using the system window control bar, etc. Tapping a spatial photo smoothly transitions it to immersive mode in place. The immersive content appears to “grow” from the original window position just by changing IPC viewing modes. This proves the functionality should be possible, but I can’t determine the correct configuration. So, my questions are: Is there a specific RealityView or WindowGroup configuration required to enable immersive content rendering from window contexts that you know of? Are there bounds/clipping settings that need to be configured to allow immersive content to “break out” of window constraints? Does .spatialStereoImmersive require a specific rendering context that’s not available in windowed RealityView instances? How do you think Apple’s Spatial Gallery app achieves this functionality? For a little more context: All viewing modes are available: [.mono, .spatialStereo, .spatialStereoImmersive]. The spatial photos are valid and work correctly in pure immersive space. Mixed immersive space is active when testing the window context. No errors or warnings in the console beyond the successful mode-switching logs I’m getting. Any insights into the proper configuration for window-hosted immersive content would be appreciated.
1
1
205
Aug ’25
Request File Access from Unity for Apple Vision Pro
Hi, I am trying to load files from the Apple Vision Pro's storage into a Unity app (using the Apple visionOS XR Plugin and not the PolySpatial package). So far, I've tried using UnitySimpleFileBrowser and UnityStandaloneFileBrowser (neither is made for the Vision Pro, and neither works there), and then implemented my own naive file browser that at least allows me to view directories (those I can see from the App Sandbox). This is of course very limited: gray folders can't be accessed, and the only three available ones don't contain anything a user would put there through the Files app. I know that an app can request access to "Files & Folders". So my question is: Is there a way to request this access for a Unity-built app at the moment? If yes, what do I need to do? I've looked into the generated Xcode project's "Capabilities", but did not find anything related to file access. Any help is appreciated!
5
0
437
Oct ’25
visionOS pushWindow being dismissed on app foreground
We seem to have found an issue when using the pushWindow action on visionOS. The issue occurs if the app is backgrounded and then reopened by selecting the app's icon on the home screen. Any window that was opened via the pushWindow action is then dismissed. We've been able to replicate the issue in a small sample project. Replication steps: Open the app. Open a window via the push action. Press the Digital Crown. On the home screen, select the app's icon again. The pushed window will now be dismissed. There is a sample project linked here that shows off the issue, including a video of the bug in progress.
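For anyone trying to reproduce this, a minimal sketch of the setup involved; the window IDs and views are placeholders, not taken from the sample project mentioned above.

import SwiftUI

@main
struct PushWindowReproApp: App {
    var body: some Scene {
        WindowGroup(id: "main") {
            MainView()
        }
        WindowGroup(id: "pushed") {
            Text("Pushed window. Background the app, then reopen it from its icon.")
        }
    }
}

struct MainView: View {
    @Environment(\.pushWindow) private var pushWindow

    var body: some View {
        Button("Push window") {
            pushWindow(id: "pushed")
        }
    }
}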
3
1
875
Jan ’26
Bouncy ball in RealityKit - game
I'm developing a VisionOS app with bouncing ball physics and struggling to achieve natural bouncing behavior using RealityKit's physics system. Despite following Apple's recommended parameters, the ball loses significant energy on each bounce and doesn't behave like a real basketball, tennis ball, or football would. With identical physics parameters (restitution = 1.0), RealityKit shows significant energy loss. I've had to implement a custom physics system to compensate, but I want to use native RealityKit physics. It's impossible to make it work by applying custom impulses. Ball Physics Setup (Following Apple Forum Recommendations) // From PhysicsManager.swift private func createBallEntityRealityKit() -> Entity { let ballRadius: Float = 0.05 let ballEntity = Entity() ballEntity.name = "bouncingBall" // Mesh and material let mesh = MeshResource.generateSphere(radius: ballRadius) var material = PhysicallyBasedMaterial() material.baseColor = .init(tint: .cyan) material.roughness = .float(0.3) material.metallic = .float(0.8) ballEntity.components.set(ModelComponent(mesh: mesh, materials: [material])) // Physics setup from Apple Developer Forums let physics = PhysicsBodyComponent( massProperties: .init(mass: 0.624), // Seems too heavy for 5cm ball material: PhysicsMaterialResource.generate( staticFriction: 0.8, dynamicFriction: 0.6, restitution: 1.0 // Perfect elasticity, yet still loses energy ), mode: .dynamic ) ballEntity.components.set(physics) ballEntity.components.set(PhysicsMotionComponent()) // Collision setup let collisionShape = ShapeResource.generateSphere(radius: ballRadius) ballEntity.components.set(CollisionComponent(shapes: [collisionShape])) return ballEntity } Ground Plane Physics // From GroundPlaneView.swift let groundPhysics = PhysicsBodyComponent( massProperties: .init(mass: 1000), material: PhysicsMaterialResource.generate( staticFriction: 0.7, dynamicFriction: 0.6, restitution: 1.0 // Perfect bounce ), mode: .static ) entity.components.set(groundPhysics) Wall Physics // From WalledBoxManager.swift let wallPhysics = PhysicsBodyComponent( massProperties: .init(mass: 1000), material: PhysicsMaterialResource.generate( staticFriction: 0.7, dynamicFriction: 0.6, restitution: 0.85 // Slightly less than ground ), mode: .static ) wall.components.set(wallPhysics) Collision Detection // From GroundPlaneView.swift content.subscribe(to: CollisionEvents.Began.self) { event in guard physicsMode == .realityKit else { return } let currentTime = Date().timeIntervalSince1970 guard currentTime - lastCollisionTime > 0.1 else { return } if event.entityA.name == "bouncingBall" || event.entityB.name == "bouncingBall" { let normal = event.collision.normal // Distinguish between wall and ground collisions if abs(normal.y) < 0.3 { // Wall bounce print("Wall collision detected") } else if normal.y > 0.7 { // Ground bounce print("Ground collision detected") } lastCollisionTime = currentTime } } Issues Observed Energy Loss: Despite restitution = 1.0 (perfect elasticity), the ball loses ~20-30% energy per bounce Wall Sliding: Ball tends to slide down walls instead of bouncing naturally No Damping Control: Comments mention damping values but they don't seem to affect the physics Change in mass also doesn't do much. 
Custom Physics System (Workaround) I've implemented a custom physics system that manually calculates velocities and applies more realistic restitution values: // From BouncingBallComponent.swift struct BouncingBallComponent: Component { var velocity: SIMD3<Float> = .zero var angularVelocity: SIMD3<Float> = .zero var bounceState: BounceState = .idle var lastBounceTime: TimeInterval = 0 var bounceCount: Int = 0 var peakHeight: Float = 0 var totalFallDistance: Float = 0 enum BounceState { case idle case falling case justBounced case bouncing case settled } } Is this energy loss expected behavior in RealityKit, even with perfect restitution (1.0)? Are there additional physics parameters (damping, solver iterations, etc.) that could improve bounce behavior? Would switching to Unity be necessary for more realistic ball physics, or am I missing something in RealityKit? Even in the last video here: https://stepinto.vision/example-code/collisions-physics-physics-material/ bounce of the ball is very unnatural - stops after 3-4 bounces. I apply custom impulses, but then if I have walls around the ball, it's almost impossible to make it look natural. I also saw this post https://developer.apple.com/forums/thread/759422 and ball is still not bouncing naturally.
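One parameter worth ruling out that the snippets above don't set explicitly: PhysicsBodyComponent exposes linearDamping and angularDamping, which remove energy independently of restitution. A hedged tweak to test, not a confirmed fix for the energy loss described above.

// Assumes the ballEntity and material values from the setup code above.
var ballPhysics = PhysicsBodyComponent(
    massProperties: .init(mass: 0.6),
    material: .generate(staticFriction: 0.8, dynamicFriction: 0.6, restitution: 1.0),
    mode: .dynamic
)
ballPhysics.linearDamping = 0    // assumption: rule out built-in velocity damping
ballPhysics.angularDamping = 0
ballEntity.components.set(ballPhysics)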
9
0
949
Nov ’25
Triangle Recommendation Limit for M5 Vision Pro
For the M2 Apple Vision Pro, there's "a general guideline, we recommend no more than 500 thousand triangles for an immersive scene, with 250 thousand for applications in the shared space." --https://developer.apple.com/videos/play/wwdc2024/10186/?time=147 Is there a revised recommendation for the M5 Apple Vision Pro?
2
1
804
Nov ’25
Photogrammetry Session - new model?
Hi Apple Team, We noticed the following exciting changelog in the latest macOS 26 beta: "A new algorithm significantly improves PhotogrammetrySession reconstruction quality of low-texture objects not captured with the ObjectCaptureSession front end. It will be downloaded and cached once in the background when the PhotogrammetrySession is used at runtime. If network isn’t available at that time, the old low quality model will be used until the new one can be downloaded. There is no code change needed to get this improved model. (145220451)" However, after trying this on the latest beta and running some tests, we do not see any differences on objects with low texture, such as single-coloured surfaces. Is there anything we are missing? The machine is definitely connected to the internet, but we have no way of knowing from the logs whether the new model is being used. Thanks!
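For anyone wanting to run the same comparison, a minimal sketch of a PhotogrammetrySession reconstruction over a folder of images; paths and detail level are placeholders, and running identical input on the previous OS and the macOS 26 beta should expose any quality difference from the new model.

import Foundation
import RealityKit

func reconstruct(imagesFolder: URL, outputURL: URL) async throws {
    // Create a session over a folder of captured images (no ObjectCaptureSession front end).
    let session = try PhotogrammetrySession(input: imagesFolder,
                                            configuration: PhotogrammetrySession.Configuration())

    // Request a single USDZ model at medium detail (placeholder choice).
    try session.process(requests: [.modelFile(url: outputURL, detail: .medium)])

    for try await output in session.outputs {
        switch output {
        case .processingComplete:
            print("Reconstruction finished: \(outputURL.path)")
        case .requestError(let request, let error):
            print("Request \(request) failed: \(error)")
        default:
            break
        }
    }
}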
5
1
682
Jul ’25
Can developers use spatial images in a third-party app's spatial widget?
Spatial widgets are a new feature of visionOS 26. I notice the system's Photos app can add a spatial image to its widget. I wonder whether third-party apps can use a spatial image, or any 3D content, in their widgets. I tried to use RealityView in a widget and it crashed. So are spatial images in widgets only supported by the system Photos app, and not available to developers right now?
1
1
742
Jul ’25
GestureComponent bug on visionOS
let component = GestureComponent(DragGesture()) iOS: ☑️ visionOS: ❌ This bug has persisted from beta to the public release; please fix it.
2
0
446
Sep ’25
Object Tracking in RealityView for iOS
Hi, we've been through the “Explore object tracking for visionOS” session and worked through the ExploringObjectTrackingWithARKit sample code. What we'd really like to see is object tracking for iOS, using devices with either LiDAR or the TrueDepth/RGB cameras.
2
1
256
Jun ’25
Is it possible to play Fisheye VR180 video directly?
Hi, I know it's possible to play equirectangular VR180 video as either SBS or MV-HEVC. For fisheye video, the only way I know of is to convert it into an AIVU file for playback. Is there any way to play fisheye video directly using AVPlayer? Thanks a lot!
2
0
501
Oct ’25