Discuss spatial computing on Apple platforms and how to design and build an entirely new universe of apps and games for Apple Vision Pro.

Posts under Spatial Computing topic

Post

Replies

Boosts

Views

Activity

RoomPlan - The delegate of ARSession is retaining x ARFrames
Hi, I'm encountering an issue in our app that uses RoomPlan and ARSession for scanning. After prolonged use, especially under heavy load from both the scanning process and other unrelated app operations, the iPhone becomes very hot and the following warning begins to appear more frequently: "ARSession <0x107559680>: The delegate of ARSession is retaining 11 ARFrames. The camera will stop delivering camera images if the delegate keeps holding on to too many ARFrames. This could be a threading or memory management issue in the delegate and should be fixed." I was able to reproduce this behavior using Apple's RoomPlanExampleApp, with only one change: I introduced a CPU-intensive workload at the end of the startSession() function:

DispatchQueue.global().asyncAfter(deadline: .now() + 5) {
    for i in 0..<4 {
        var value = 10_000
        DispatchQueue.global().async {
            while true {
                value *= 10_000
                value /= 10_000
                value ^= 10_000
                value = 10_000
            }
        }
    }
}

I suspect this is a RoomPlan API problem, which is why I filed feedback: 17441091
0
0
300
May ’25
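One common remedy for the ARFrame warning above is to keep the ARSession delegate from holding the frame itself: copy out just the values you need and hand those to background work, so the frame (and its camera pixel buffers) is released before the next callback. A minimal sketch, with an illustrative process(transform:at:) placeholder standing in for the app's own heavy work:

import ARKit
import simd

final class ScanDelegate: NSObject, ARSessionDelegate {
    func session(_ session: ARSession, didUpdate frame: ARFrame) {
        // Copy the small values needed up front so the closure below
        // captures plain data, not the ARFrame (and its pixel buffers).
        let transform = frame.camera.transform
        let timestamp = frame.timestamp

        DispatchQueue.global(qos: .utility).async {
            // Capturing `frame` here instead would keep it alive and
            // eventually trigger the "retaining N ARFrames" warning.
            Self.process(transform: transform, at: timestamp)
        }
    }

    private static func process(transform: simd_float4x4, at timestamp: TimeInterval) {
        // Placeholder for the app's own heavy work.
    }
}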
Using AVAsynchronousKeyValueLoading.load() on an AVAssetTrack gives an error
I'm seeing this error while attempting to compile my visionOS app under Xcode 26. My existing code looks like:

let (naturalSize, formatDescriptions, mediaCharacteristics) = try? await videoTrack.load(.naturalSize, .formatDescriptions, .mediaCharacteristics)

This now gives a compiler error: "Type of expression is ambiguous without a type annotation". I don't see anything that was changed or deprecated in the latest version. Loading the properties individually works fine, i.e.:

let naturalSize = try? await videoTrack.load(.naturalSize)
let formatDescriptions = try? await videoTrack.load(.formatDescriptions)
let mediaCharacteristics = try? await videoTrack.load(.mediaCharacteristics)
1
0
123
Jun ’25
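One workaround worth trying for the error above (an assumption, not a confirmed fix): give the result an explicit tuple type so the three-property overload of load(_:_:_:) resolves. The value types involved are CGSize, [CMFormatDescription], and [AVMediaCharacteristic]:

import AVFoundation

func inspect(_ videoTrack: AVAssetTrack) async {
    // Annotating the tuple explicitly can help the compiler pick the
    // three-key overload of load(_:_:_:).
    let loaded: (CGSize, [CMFormatDescription], [AVMediaCharacteristic])? =
        try? await videoTrack.load(.naturalSize, .formatDescriptions, .mediaCharacteristics)

    if let (naturalSize, formatDescriptions, mediaCharacteristics) = loaded {
        print(naturalSize, formatDescriptions.count, mediaCharacteristics.count)
    }
}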
white gap between objects in RealityView
I want to display a huge image in RealityView in 3D space on Vision Pro. Instead of one giant file, I'm using a lot of big images; to achieve this, I'm generating multiple planes exactly beside each other and putting each image on one of them. Although the planes are exactly beside each other, there is still a white gap between them (image below). Does anybody know how to fix this issue?
0
0
164
May ’25
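Two things commonly produce seams like the one described above: tiny floating-point gaps between adjacent plane positions and texture sampling bleed at tile edges. A minimal sketch of laying tiles out with a slight overlap; the tile size, overlap value, and texture array layout are illustrative assumptions:

import RealityKit
import UIKit

func makeTiledWall(textures: [[TextureResource]], tileSize: Float = 1.0) -> Entity {
    let root = Entity()
    let overlap: Float = 0.001           // hide sub-millimetre seams between neighbours

    for (row, rowTextures) in textures.enumerated() {
        for (col, texture) in rowTextures.enumerated() {
            var material = UnlitMaterial()
            material.color = .init(tint: .white, texture: .init(texture))

            // Slightly oversize each plane so neighbours overlap by `overlap`.
            let mesh = MeshResource.generatePlane(width: tileSize + overlap,
                                                  height: tileSize + overlap)
            let tile = ModelEntity(mesh: mesh, materials: [material])
            tile.position = [Float(col) * tileSize, Float(row) * tileSize, 0]
            root.addChild(tile)
        }
    }
    return root
}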
Can we access a "Locked in Place" value when a window has been locked without being snapped to a surface?
Starting in visionOS 26, users can snap windows to surfaces. These windows are locked in place and are later restored by visionOS. We can access the snapping state via surfaceSnappingInfo (see its documentation). Users can also lock a free-floating (unsnapped) window from a context menu in the window controls. Is there a way to tell when a window has been locked without being snapped to a surface?
7
3
220
Jun ’25
What is the first reliable position of the apple vision pro device?
In several visionOS apps, we readjust our scenes to the user's eye level (their head). But we have encountered issues whereby the WorldTrackingProvider returns bad/incorrect positions for the first x number of frames. See the code below, which you can copy-paste into any Immersive Space. Relaunch the space and observe that the numberOfBadWorldInfos value is inconsistent.

a. What is the most reliable way to get the device's position?
b. Is this indeed a bug?
c. Are we using worldInfo improperly?
d. As a workaround, in our apps we let 10 frames pass before using worldInfo. Should we set our threshold differently?

import ARKit
import Combine
import OSLog
import SwiftUI
import RealityKit
import RealityKitContent

let SUBSYSTEM = Bundle.main.bundleIdentifier!

struct ImmersiveView: View {
    let logger = Logger(subsystem: SUBSYSTEM, category: "ImmersiveView")
    let session = ARKitSession()
    let worldInfo = WorldTrackingProvider()

    @State var sceneUpdateSubscription: EventSubscription? = nil
    @State var deviceTransform: simd_float4x4? = nil
    @State var numberOfBadWorldInfos = 0
    @State var isBadWorldInfoLoged = false

    var body: some View {
        RealityView { content in
            try? await session.run([worldInfo])
            sceneUpdateSubscription = content.subscribe(to: SceneEvents.Update.self) { event in
                guard let pose = worldInfo.queryDeviceAnchor(atTimestamp: CACurrentMediaTime()) else {
                    return
                }
                // `worldInfo` does not return correct values for the first few frames (exact number of frames is unknown)
                // - known SO: https://stackoverflow.com/questions/78396187/how-to-determine-the-first-reliable-position-of-the-apple-vision-pro-device
                deviceTransform = pose.originFromAnchorTransform
                if deviceTransform!.columns.3.y < 1.6 {
                    numberOfBadWorldInfos += 1
                    logger.warning("\(#function) \(#line) deviceTransform.columns.3.y \(deviceTransform!.columns.3.y), numberOfBadWorldInfos \(numberOfBadWorldInfos)")
                } else {
                    if !isBadWorldInfoLoged {
                        logger.info("\(#function) \(#line) deviceTransform.columns.3.y \(deviceTransform!.columns.3.y), numberOfBadWorldInfos \(numberOfBadWorldInfos)")
                    }
                    isBadWorldInfoLoged = true // stop logging.
                }
            }
        }
    }
}
3
0
163
May ’25
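A hedged workaround for the question above, instead of a fixed frame count: only trust poses once the WorldTrackingProvider reports .running and the reported height passes a sanity check (the 0.5 m threshold below is illustrative, in the spirit of the poster's 1.6 m check):

import ARKit
import QuartzCore

func currentDeviceTransform(from worldInfo: WorldTrackingProvider) -> simd_float4x4? {
    // Don't query until world tracking is actually running.
    guard worldInfo.state == .running else { return nil }
    guard let anchor = worldInfo.queryDeviceAnchor(atTimestamp: CACurrentMediaTime()) else {
        return nil
    }
    let transform = anchor.originFromAnchorTransform
    // An eye height near zero usually means tracking has not settled yet.
    guard transform.columns.3.y > 0.5 else { return nil }
    return transform
}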
VisionOS 26 - threadsPerThreadgroup limit causing crash on device (but not in simulator)
Hi all, I'm running into an issue with an app that previously worked fine on device using visionOS 2.0. After updating to visionOS 26, the same code runs fine in the simulator but crashes on the device with the following error:

-[MTLDebugComputeCommandEncoder _validateThreadsPerThreadgroup:]:1330: failed assertion `(threadsPerThreadgroup.width(32) * threadsPerThreadgroup.height(32) * threadsPerThreadgroup.depth(1))(1024) must be <= 832. (kernel threadgroup size limit)`

Is there any documented way to check or increase the allowed threadsPerThreadgroup size on Apple Vision Pro? Or any recommended workaround for this regression? Thanks in advance!
3
0
222
Jun ’25
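For the assertion above, a common defensive pattern is to derive threadgroup dimensions from the pipeline's own limits at runtime instead of hard-coding 32 x 32, since maxTotalThreadsPerThreadgroup varies by device and kernel. A minimal sketch:

import Metal

func dispatch(encoder: MTLComputeCommandEncoder,
              pipeline: MTLComputePipelineState,
              gridSize: MTLSize) {
    // Ask the pipeline what this device and kernel actually allow
    // (832 in the crash above, 1024 on many other GPUs).
    let maxTotal = pipeline.maxTotalThreadsPerThreadgroup
    let width = pipeline.threadExecutionWidth
    let height = max(1, maxTotal / width)

    let threadsPerThreadgroup = MTLSize(width: width, height: height, depth: 1)
    encoder.setComputePipelineState(pipeline)
    // dispatchThreads handles grids that aren't a multiple of the threadgroup size.
    encoder.dispatchThreads(gridSize, threadsPerThreadgroup: threadsPerThreadgroup)
}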
Any recommended content-aware compression strategy for .ktx textures in Reality Composer Pro?
In my Reality Composer Pro workflow for Vision Pro development, I'm using xcrun realitytool image to pre-compress textures into .ktx format, typically using ASTC block compression. These textures are used for cubemaps and environment assets.

I've noticed that regardless of the image content, whether it's a highly detailed photo or a completely black image, once compressed with the same ASTC block size (e.g., ASTC_8x8) the resulting .ktx file size is nearly identical. There appears to be no content-aware logic that adapts the compression ratio to the actual texture complexity.

In contrast, Unreal Engine behaves differently: even when all cubemap faces are imported at the same resolution as DDS textures, the engine performs content-aware compression during packaging:
- Low-complexity images are compressed more aggressively
- The final packaged file size varies based on content complexity

Since Reality Composer Pro requires textures to be pre-compressed as .ktx, there's no opportunity for runtime optimization or per-image compression adjustment. Just wondering: is there any recommended way to implement content-aware compression for .ktx textures in Reality Composer Pro? Or any best practices to optimize .ktx sizes based on image complexity? Thanks!
0
0
203
Jun ’25
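One note on the observation above: ASTC at a fixed block size is fixed-rate, every block encodes to the same number of bytes, which is why file sizes match regardless of content. Any content-aware behaviour therefore has to happen before compression, for example by choosing a block size per image. A rough sketch of that idea as a pre-processing step; the thresholds and function name are illustrative, and the actual realitytool invocation stays whatever the existing pipeline uses:

import CoreGraphics
import ImageIO
import Foundation

// Suggest an ASTC block size from a crude luminance-variance measure.
func suggestedASTCBlock(for url: URL) -> String? {
    guard let source = CGImageSourceCreateWithURL(url as CFURL, nil),
          let image = CGImageSourceCreateImageAtIndex(source, 0, nil) else { return nil }

    let width = 64, height = 64   // downsample: variance, not detail, is what we need
    var pixels = [UInt8](repeating: 0, count: width * height)
    guard let context = CGContext(data: &pixels, width: width, height: height,
                                  bitsPerComponent: 8, bytesPerRow: width,
                                  space: CGColorSpaceCreateDeviceGray(),
                                  bitmapInfo: CGImageAlphaInfo.none.rawValue) else { return nil }
    context.draw(image, in: CGRect(x: 0, y: 0, width: width, height: height))

    let mean = pixels.reduce(0.0) { $0 + Double($1) } / Double(pixels.count)
    let variance = pixels.reduce(0.0) { $0 + pow(Double($1) - mean, 2) } / Double(pixels.count)

    // Thresholds are illustrative: flat images tolerate larger (lossier) blocks.
    switch variance {
    case ..<50:   return "ASTC_12x12"
    case ..<500:  return "ASTC_8x8"
    default:      return "ASTC_4x4"
    }
}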
Projecting a Cube with a Number in ARKit
I'm a novice in RealityKit and ARKit. I'm using ARKit in SwiftUI to show a cube with a number, as shown below.

import SwiftUI
import RealityKit
import ARKit

struct ContentView: View {
    var body: some View {
        return ARViewContainer()
    }
}

#Preview {
    ContentView()
}

struct ARViewContainer: UIViewRepresentable {
    typealias UIViewType = ARView

    func makeUIView(context: UIViewRepresentableContext<ARViewContainer>) -> ARView {
        let arView = ARView(frame: .zero, cameraMode: .ar, automaticallyConfigureSession: true)
        arView.enableTapGesture()
        return arView
    }

    func updateUIView(_ uiView: ARView, context: UIViewRepresentableContext<ARViewContainer>) {
    }
}

extension ARView {
    func enableTapGesture() {
        let tapGestureRecognizer = UITapGestureRecognizer(target: self, action: #selector(handleTap(recognizer:)))
        self.addGestureRecognizer(tapGestureRecognizer)
    }

    @objc func handleTap(recognizer: UITapGestureRecognizer) {
        let tapLocation = recognizer.location(in: self)
        // print("Tap location: \(tapLocation)")
        guard let rayResult = self.ray(through: tapLocation) else { return }
        let results = self.raycast(from: tapLocation, allowing: .estimatedPlane, alignment: .any)
        if let firstResult = results.first {
            let position = simd_make_float3(firstResult.worldTransform.columns.3)
            placeObject(at: position)
        }
    }

    func placeObject(at position: SIMD3<Float>) {
        let mesh = MeshResource.generateBox(size: 0.3)
        let material = SimpleMaterial(color: UIColor.systemRed, roughness: 0.3, isMetallic: true)
        let modelEntity = ModelEntity(mesh: mesh, materials: [material])
        var unlitMaterial = UnlitMaterial()
        if let textureResource = generateTextResource(text: "1", textColor: UIColor.white) {
            unlitMaterial.color = .init(tint: .white, texture: .init(textureResource))
            modelEntity.model?.materials = [unlitMaterial]
            let id = UUID().uuidString
            modelEntity.name = id
            modelEntity.transform.scale = [0.3, 0.1, 0.3]
            modelEntity.generateCollisionShapes(recursive: true)
            let anchorEntity = AnchorEntity(world: position)
            anchorEntity.addChild(modelEntity)
            self.scene.addAnchor(anchorEntity)
        }
    }

    func generateTextResource(text: String, textColor: UIColor) -> TextureResource? {
        if let image = text.image(withAttributes: [NSAttributedString.Key.foregroundColor: textColor], size: CGSize(width: 18, height: 18)),
           let cgImage = image.cgImage {
            let textureResource = try? TextureResource(image: cgImage, options: TextureResource.CreateOptions.init(semantic: nil))
            return textureResource
        }
        return nil
    }
}

I tap the floor and get a cube with '1' as shown below. The background color of the cube is black, I guess. Where does this color come from, and how can I change it into, say, red? Thanks.
4
0
198
Jul ’25
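A hedged note on the black cube above: if the String-to-image helper draws only the glyph with no background fill, the texture's empty pixels can come out black under the UnlitMaterial. One way to get a red cube is to paint the background colour into the texture image; a sketch using UIGraphicsImageRenderer (function and parameter names here are illustrative, not from the original post):

import UIKit

// Render `text` onto a solid background so the cube no longer shows black.
func makeLabelImage(text: String,
                    textColor: UIColor = .white,
                    background: UIColor = .systemRed,
                    size: CGSize = CGSize(width: 256, height: 256)) -> CGImage? {
    let renderer = UIGraphicsImageRenderer(size: size)
    let image = renderer.image { context in
        // Fill the whole texture with the background colour first.
        background.setFill()
        context.fill(CGRect(origin: .zero, size: size))

        let attributes: [NSAttributedString.Key: Any] = [
            .font: UIFont.boldSystemFont(ofSize: size.height * 0.6),
            .foregroundColor: textColor
        ]
        let textSize = (text as NSString).size(withAttributes: attributes)
        let origin = CGPoint(x: (size.width - textSize.width) / 2,
                             y: (size.height - textSize.height) / 2)
        (text as NSString).draw(at: origin, withAttributes: attributes)
    }
    return image.cgImage
}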
Partial Occlusion Material
I am looking for a material that functions in the same way that OcclusionMaterial does, except that it only partially occludes whatever is behind it. One way that I thought of doing this was to change the opacity of the entity that was covered in OcclusionMaterial; however, this did not change anything. Please let me know if this is possible.
2
1
169
Apr ’25
SpatialEventGesture Not Working to Show Hidden Menu in Immersive Panorama View - visionOS
Problem Description

I'm developing a Vision Pro app that displays 360° panoramic photos in a full immersive space. I have a floating menu that auto-hides after 5 seconds, and I want users to be able to show the menu again using spatial gestures (particularly pinch gestures) when it's hidden. However, the SpatialEventGesture implementation is not working as expected. The menu doesn't appear when users perform pinch gestures or other spatial interactions in the immersive space.

Current Implementation

Here's the relevant gesture detection code in my ImmersiveView:

import SwiftUI
import RealityKit

struct ImmersiveView: View {
    @EnvironmentObject var appModel: AppModel
    @Environment(\.openWindow) private var openWindow

    var body: some View {
        RealityView { content in
            // RealityView content setup with panoramic sphere...
            let rootEntity = Entity()
            content.add(rootEntity)
            // Load panoramic content here...
        }
        // Using SpatialEventGesture to handle multiple spatial gestures
        .gesture(
            SpatialEventGesture()
                .onEnded { eventCollection in
                    // Check menu visibility state
                    if !appModel.isPanoramaMenuVisible {
                        // Iterate through event collection to handle various gestures
                        for event in eventCollection {
                            switch event.kind {
                            case .touch:
                                print("Detected spatial touch gesture, showing menu")
                                showMenuWithGesture()
                                return
                            case .indirectPinch:
                                print("Detected spatial pinch gesture, showing menu")
                                showMenuWithGesture()
                                return
                            case .pointer:
                                print("Detected spatial pointer gesture, showing menu")
                                showMenuWithGesture()
                                return
                            @unknown default:
                                print("Detected unknown spatial gesture: \(event.kind)")
                                showMenuWithGesture()
                                return
                            }
                        }
                    }
                }
        )
        // Keep long press gesture as backup
        .simultaneousGesture(
            LongPressGesture(minimumDuration: 1.5)
                .onEnded { _ in
                    if !appModel.isPanoramaMenuVisible {
                        print("Detected long press gesture, showing menu")
                        showMenuWithGesture()
                    }
                }
        )
    }

    private func showMenuWithGesture() {
        if !appModel.isPanoramaMenuVisible {
            appModel.showPanoramaMenu()
            if !appModel.windowExists(id: "PanoramaMenu") {
                openWindow(id: "PanoramaMenu", value: "menu")
            }
        }
    }
}

What I've Tried

- Multiple SpatialTapGesture approaches: Originally tried using multiple .gesture() modifiers with SpatialTapGesture(count: 1) and SpatialTapGesture(count: 2), but realized they override each other.
- SpatialEventGesture implementation: Switched to SpatialEventGesture to handle multiple event types (.touch, .indirectPinch, .pointer), but pinch gestures still don't trigger the menu.
- Added debugging: Console logs show that the gesture callbacks are never called when performing pinch gestures in the immersive space.
- Backup LongPressGesture: Added a simultaneous long press gesture as backup, which also doesn't work consistently.

Expected Behavior

When the panorama menu is hidden (after the 5-second auto-hide), users should be able to:
- Perform a pinch gesture (indirect pinch) to show the menu
- Tap in space to show the menu
- Use other spatial gestures to show the menu

Questions

Is SpatialEventGesture the correct approach for detecting gestures in a full immersive RealityView?
Are there any special considerations for gesture detection when the RealityView contains a large panoramic sphere that might be intercepting gestures?
Should I be using a different gesture approach for visionOS immersive spaces?
Is there a way to ensure gestures work even when the RealityView content (panoramic sphere) might be blocking them?

Environment

- Xcode 16.1
- visionOS 2.5
- Testing on Vision Pro device
- App uses SwiftUI + RealityKit

Any guidance on the proper way to implement spatial gesture detection in visionOS immersive spaces would be greatly appreciated!

Additional Context

The app manages multiple windows, and the gesture detection should work specifically when in the immersive panorama mode with the menu hidden. Thank you for any help or suggestions!
1
0
199
Jun ’25
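A hedged observation on the post above: spatial gestures attached to a RealityView generally only fire when the event hits an entity carrying both a CollisionComponent and an InputTargetComponent, so a panorama sphere without them never produces events. A minimal sketch of adding the components (the sphere construction here is illustrative, not the poster's code):

import RealityKit

func makeInteractivePanoramaSphere(radius: Float = 10) -> ModelEntity {
    let sphere = ModelEntity(mesh: .generateSphere(radius: radius))
    // Flip the sphere inside-out in the app's own way (negative scale,
    // inverted normals, etc.) so the panorama is visible from inside.

    // Without these two components, SpatialEventGesture / SpatialTapGesture
    // attached to the RealityView never receive events from the sphere.
    sphere.components.set(InputTargetComponent())
    sphere.components.set(CollisionComponent(shapes: [.generateSphere(radius: radius)]))
    return sphere
}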
Imitating a grip on an object
I'm playing about with the hand tracking systems in RealityKit / Vision Pro. I thought it would be interesting if I could attach a virtual object to a hand when the hand is gripping (thought it would be fun to attach a basic cylinder to mimic a wand from Harry Potter). I'm able to detect when the user is gripping, but I'm having trouble placing an object as though it's within the hand. The simplest version of this is using an AnchorEntity pointing to the user's palm, which kind of works, but quickly breaks the illusion when you rotate the wrist or hand. It seems as though I will have to roll my own anchor entity using the various points of the user's hand, and I thought calculating some median point between the thumb and little finger tips would be a good start, but it's proven a little difficult as we need both rotation and position. I'm already out of my depth with RealityKit and matrices (and thanks to ChatGPT) I have some code, but as soon as I apply the position manually (as opposed to a hand anchor entity) it fails to render on the user's hand. It feels like this should already have been something someone has looked into; any ideas on what might be the issue here?

Note: HandTrackingSystem.handTracking is a HandTrackingProvider()

guard let anchors = HandTrackingSystem.handTracking.latestAnchors.leftHand else { return }

if let thumb = anchors.handSkeleton?.joint(.thumbTip),
   let little = anchors.handSkeleton?.joint(.littleFingerTip) {
    let thumbPos = simd_make_float3(thumb.anchorFromJointTransform.columns.3)
    let littlePos = simd_make_float3(little.anchorFromJointTransform.columns.3)
    let midPos = (thumbPos + littlePos) / 2
    let direction = normalize(littlePos - thumbPos)
    let rotation = simd_quatf(from: [0, 1, 0], to: direction)

    wandEntity.transform.translation = midPos
    wandEntity.transform.rotation = rotation

    content.add(wandEntity)
}
1
0
579
Jul ’25
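A hedged guess at the issue above: anchorFromJointTransform is relative to the hand anchor, not the world, so positioning the wand with it directly places the entity near the origin rather than on the hand. Composing it with the anchor's originFromAnchorTransform gives world-space joint transforms; a minimal sketch (function name and parameters are illustrative):

import ARKit
import RealityKit
import simd

func updateWand(_ wandEntity: Entity, from hand: HandAnchor) {
    guard let skeleton = hand.handSkeleton else { return }
    let thumb = skeleton.joint(.thumbTip)
    let little = skeleton.joint(.littleFingerTip)

    // Anchor-space joint transform -> world space via the hand anchor.
    let worldFromThumb = hand.originFromAnchorTransform * thumb.anchorFromJointTransform
    let worldFromLittle = hand.originFromAnchorTransform * little.anchorFromJointTransform

    let thumbPos = simd_make_float3(worldFromThumb.columns.3)
    let littlePos = simd_make_float3(worldFromLittle.columns.3)

    wandEntity.position = (thumbPos + littlePos) / 2
    let direction = normalize(littlePos - thumbPos)
    wandEntity.orientation = simd_quatf(from: [0, 1, 0], to: direction)
}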
How to Achieve Realistic Colors and Textures in LiDAR Scanning with Swift
Hello, I'm developing a LiDAR-based scanning app using Swift, where I can successfully perform scans and export the results as .obj files. My goal is to have the scan's colors and textures closely resemble real-world visuals as captured by the camera, similar to the results shown in this repository. In the referenced repository, the result is demonstrated with a single screenshot, but I want to display the textures and colors throughout the entire scanning process, not just at the final result. To clarify, I'm not focused on scanning individual objects but rather larger environments like rooms, houses, or outdoor spaces such as streets.

Here's what I'm aiming for:
- Realistic colors and textures that match what the camera sees during the scan.
- Continuous texture rendering during the scanning process, not just in the final exported model.

Could anyone share guidance, sample code, or point me to relevant documentation to achieve this? Any help would be greatly appreciated! Thank you!
1
0
166
May ’25
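One building block for the goal above is vertex colouring: project each mesh vertex into the current camera image and sample the pixel there. A minimal sketch of just the projection step, assuming the point is already in world space; sampling the CVPixelBuffer and texture baking are left out:

import ARKit
import UIKit

// Returns the pixel coordinate in the captured image that a world-space
// point projects to, or nil if it falls outside the frame.
func imagePoint(for worldPoint: simd_float3, in frame: ARFrame) -> CGPoint? {
    let imageSize = CGSize(width: CVPixelBufferGetWidth(frame.capturedImage),
                           height: CVPixelBufferGetHeight(frame.capturedImage))
    let projected = frame.camera.projectPoint(worldPoint,
                                              orientation: .landscapeRight,
                                              viewportSize: imageSize)
    guard projected.x >= 0, projected.x < imageSize.width,
          projected.y >= 0, projected.y < imageSize.height else { return nil }
    return projected
}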
Immersive environment learning material
I really love the immersive environments, but I don’t have experience with creating them. Do you have resources or tutorials you can recommend for creating these from scratch? I’ve seen the sample projects and videos, but they usually start in the middle, assuming you already have the assets created.
1
0
92
Jul ’25
Creating a voxel mesh and render it using metal within a RealityKit ImmersiveView
Hi everyone, I'm creating an educational App that allows doing computational design in an immersive environment with the Vision Pro. The App is free and can be found here: https://apps.apple.com/us/app/arcade-topology/id6742103633

The problem I have is that the mesh of voxels I currently create uses ModelEntity, and I recently read that this is horrible for scalability. I already start to see issues when I try to use thousands of voxels. I also read somewhere that I should take advantage of the GPU and use Metal to that end. I was wondering if someone could point me to a tutorial or article that discusses this. In essence, I need to create a 3D voxel mesh, and those voxels have to update their opacity within an iterative loop. Thanks!

- Alejandro
3
0
156
Jun ’25
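Before moving to Metal for the voxel question above, a common intermediate step is to merge all voxels into a single MeshResource so RealityKit draws one mesh rather than thousands of ModelEntity instances. A minimal sketch under that assumption; per-voxel opacity would still need a separate mechanism (for example a custom material with per-vertex data), which this sketch does not cover:

import RealityKit
import simd

// Merge many voxels into a single mesh: one draw call instead of one entity per voxel.
func makeVoxelMesh(centers: [SIMD3<Float>], size: Float) throws -> MeshResource {
    let h = size / 2
    // Eight corners of a cube around the origin.
    let corners: [SIMD3<Float>] = [
        [-h, -h, -h], [ h, -h, -h], [ h,  h, -h], [-h,  h, -h],
        [-h, -h,  h], [ h, -h,  h], [ h,  h,  h], [-h,  h,  h]
    ]
    // Twelve counter-clockwise triangles (two per face), indexing into `corners`.
    let cubeIndices: [UInt32] = [
        0, 3, 2, 0, 2, 1,   // back  (z = -h)
        4, 5, 6, 4, 6, 7,   // front (z = +h)
        0, 1, 5, 0, 5, 4,   // bottom
        3, 7, 6, 3, 6, 2,   // top
        0, 4, 7, 0, 7, 3,   // left
        1, 2, 6, 1, 6, 5    // right
    ]

    var positions: [SIMD3<Float>] = []
    var indices: [UInt32] = []
    for (i, center) in centers.enumerated() {
        let base = UInt32(i * corners.count)
        positions.append(contentsOf: corners.map { center + $0 })
        indices.append(contentsOf: cubeIndices.map { base + $0 })
    }

    var descriptor = MeshDescriptor(name: "voxels")
    descriptor.positions = MeshBuffer(positions)
    descriptor.primitives = .triangles(indices)
    return try MeshResource.generate(from: [descriptor])
}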
Enterprise API with Education Account
Hello, I am trying to develop an app that broadcasts what the user sees via Apple Vision Pro. I am a graduate student studying at a university, and I have two questions: If I want to use passthrough in screen capture (in visionOS), do I have to join the Apple Developer Enterprise Program to get the Enterprise API? And can I buy the Apple Developer Enterprise Program (Enterprise API) with my university account? Have any of you been able to do this? Thank you
2
1
275
Jul ’25
How to export an entity to USD?
I want to record an animation with an entity, then export it to .usd without using Reality Composer Pro. How can I achieve that?
1
0
79
Apr ’25
MeshInstancesComponent VisionOS 26 Beta 2
Has anyone had success with MeshInstancesComponent? I tried to follow the sample code from What's New in RealityKit but it wouldn't compile. I was able to use one of the init overloads to get it to compile, but using it crashes both my device and the simulator. Even with one instance.
1
0
162
Jun ’25
Object Tracking in RealityView for iOS
Hi, we've been through the Explore Object Tracking for visionOS and worked through the sample code ExploringObjectTrackingWithARKit. What we'd really like to see is Object Tracking for iOS using devices with either LiDAR or the TrueDepth/RGB cameras.
2
1
256
Jun ’25
I'm trying to use VideoMaterial on an obj in Xcode 16, I'm lost.
I believe I have created a VideoMaterial and assigned it to a mesh with code I found in the developer documentation, but I'm getting this error: "Trailing closure passed to parameter of type 'String' that does not accept a closure". I have attached a photo of the code and where the error happens. Any help will be greatly appreciated.
0
0
128
May ’25
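For reference on the post above, a minimal sketch of a standard VideoMaterial setup (the URL, mesh, and function name are illustrative; without the poster's actual code the trailing-closure error itself can't be pinpointed):

import RealityKit
import AVFoundation

func makeVideoEntity(videoURL: URL) -> ModelEntity {
    let player = AVPlayer(url: videoURL)
    let material = VideoMaterial(avPlayer: player)

    // Apply the video material to any mesh; a plane is the usual choice.
    let entity = ModelEntity(mesh: .generatePlane(width: 1.6, height: 0.9),
                             materials: [material])
    player.play()
    return entity
}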
Reality Composer/Xcode 15
Reality Composer is no longer available in the Xcode 15 release. Is this intended, or will it be available in later releases? Do I need to revert to Xcode 15 beta 8 to get Reality Composer?
4
0
1.1k
Jul ’25