Discuss Spatial Computing on Apple Platforms.

Posts under the General subtopic

Post

Replies

Boosts

Views

Created

How to show a spatial photo in my application
I tried to show a spatial photo in my application using SwiftUI's Image, but it shows only a flat version of it, even on Vision Pro. How can I show the spatial photo to users? Are there any options for this?
4
0
1.6k
Mar ’24
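SwiftUI's Image only draws the flat 2D representation of a spatial photo. A minimal sketch of one route that should show the spatial version, assuming visionOS 2's QuickLook PreviewApplication API and a hypothetical bundled HEIC file:

    import SwiftUI
    import QuickLook  // PreviewApplication on visionOS 2+

    struct SpatialPhotoButton: View {
        // Hypothetical spatial photo bundled with the app.
        let photoURL = Bundle.main.url(forResource: "SpatialPhoto", withExtension: "heic")!

        var body: some View {
            Button("View spatial photo") {
                // Hands the file to the system viewer, which can present
                // spatial photos with depth; SwiftUI's Image cannot.
                _ = PreviewApplication.open(urls: [photoURL], selectedURL: photoURL)
            }
        }
    }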
Curved/panorama window in visionOS 2?
The new Mac virtual display feature on visionOS 2 offers a curved/panoramic window. I was wondering if this is simply a property that can be applied to a window, or if it involves an immersive mode or SceneKit/RealityKit?
5
0
1.5k
Jun ’24
[WebXR] Support for AR module in visionOS 2.x
Thank you again for pushing the web forward in visionOS 2, super exciting! The latest WWDC24 video touched on VR experiences for visionOS 2.0 using WebXR; however, there was no mention of passthrough AR experiences. Samples such as this one are not supported: https://immersive-web.github.io/webxr-samples/immersive-ar-session.html In Settings > Safari there is a feature flag for the AR WebXR module, but enabling it did not seem to change anything. Is this the expected behavior at this time? Are there any developer previews we could try?
12
6
3.6k
Jun ’24
Not able to run Spaceship game project
I downloaded Xcode 16 and updated my macOS to 15, but I keep getting this error when trying to build the game for the simulator or a device: [xrsimulator] Exception thrown: The operation couldn’t be completed. (realitytool.RKAssetsCompiler.RKAssetsCompilerError error 3.)
11
3
1.5k
Jun ’24
Object Tracking with Reality Composer Pro and ARKit support for Unity
The object tracking feature has been announced for visionOS 2.0+. Is there any support for it in Unity's PolySpatial, or is it only available in Swift and Xcode?
1
1
603
Jun ’24
How do you incorporate SharePlay into an immersive scene in visionOS?
I've got an immersive scene that I want to be able to bring additional users into via SharePlay, where each user would be able to see (and hopefully interact with) the immersive scene. How does one implement that?
2
1
885
Jun ’24
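A minimal GroupActivities sketch of the usual starting point, assuming a hypothetical ImmersiveExploration activity; synchronizing entity state between participants (for example via the session's GroupSessionMessenger) is the larger follow-on task and is not shown:

    import GroupActivities

    // Hypothetical activity representing the shared immersive scene.
    struct ImmersiveExploration: GroupActivity {
        var metadata: GroupActivityMetadata {
            var meta = GroupActivityMetadata()
            meta.title = "Explore Together"
            meta.type = .generic
            return meta
        }
    }

    func startSharePlay() async {
        // Offer the activity to the current FaceTime call, if appropriate.
        switch await ImmersiveExploration().prepareForActivation() {
        case .activationPreferred:
            _ = try? await ImmersiveExploration().activate()
        default:
            break
        }
    }

    func observeSessions() async {
        // Each participant joins incoming sessions, then the app can
        // open its immersive space for everyone.
        for await session in ImmersiveExploration.sessions() {
            session.join()
        }
    }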
Tapping once with both hands only works sometimes in visionOS
Hello! I have an iOS app and I am looking into supporting visionOS. I have a whole bunch of gestures set up using UIGestureRecognizer, and so far most of them work great in visionOS! But I do see something odd that I am not sure can be fixed on my end. I have a UITapGestureRecognizer set up with numberOfTouchesRequired = 2, which I assume translates in visionOS to tapping your thumb and index finger together on both hands. When I tap with both hands, sometimes this tap gesture fires and other times it doesn't, reporting only one touch when it should be two. Interestingly, I see the same behavior in Apple Maps, where tapping once with both hands should zoom out the map but only works sometimes. Can anyone explain this, or am I missing something?
6
0
831
Jul ’24
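For reference, a minimal sketch of the gesture setup the post describes. On visionOS a two-touch tap corresponds to pinching with both hands at the same moment, so a small timing offset between the hands is one plausible reason only one touch gets reported:

    import UIKit

    class GestureViewController: UIViewController {
        override func viewDidLoad() {
            super.viewDidLoad()
            // Two-touch tap: on visionOS this maps to pinching with
            // both hands at (nearly) the same time.
            let twoHandTap = UITapGestureRecognizer(target: self,
                                                    action: #selector(handleTwoHandTap(_:)))
            twoHandTap.numberOfTouchesRequired = 2
            view.addGestureRecognizer(twoHandTap)
        }

        @objc private func handleTwoHandTap(_ recognizer: UITapGestureRecognizer) {
            // numberOfTouches should be 2 here; the post reports it is
            // sometimes 1, suggesting the pinches were not simultaneous.
            print("Tap with \(recognizer.numberOfTouches) touch(es)")
        }
    }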
ImageAnchoringSource from URL
Hello, I was wondering how I can initialize an ImageAnchoringSource using https://developer.apple.com/documentation/realitykit/anchoringcomponent/imageanchoringsource/init(_:) When I construct one from a URL, it doesn't seem to be tracked, and I see the following when I debug-print the component:
▿ 0 : AnchoringComponent
  ▿ target : Target
    ▿ referenceImage : 1 element
      ▿ from : ImageAnchoringSource
        ▿ url : Optional<URL>
          ▿ some : file:///var/mobile/Containers/Data/Application/D1126EA0-A1D7-468F-A40C-8578B7F5BDDF/Library/Caches/CodeCache/0E457AA7-2195-48B9-9DD4-58CEB9397F69.png
            - _url : file:///var/mobile/Containers/Data/Application/D1126EA0-A1D7-468F-A40C-8578B7F5BDDF/Library/Caches/CodeCache/0E457AA7-2195-48B9-9DD4-58CEB9397F69.png
            - _parseInfo : nil
            - _baseParseInfo : nil
        - name : nil
        - group : nil
  ▿ trackingMode : TrackingMode
    - trackingMode : 2
Is there a specific format for the parseInfo? When I use the same image to make an image anchoring source by group and name in AR Resources, it is tracked. Thank you!
2
1
1.2k
Jul ’24
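A sketch of the two construction paths the post compares. The group/name variant reads the physical image size from the AR Resources asset catalog; one hedged guess about the URL variant is that tracking needs physical-size metadata the raw file may not carry. The "Poster" names here are hypothetical:

    import RealityKit

    // Known-working path from the post: reference the image via an
    // "AR Resources" group in the asset catalog, where the physical
    // size of the printed image is configured.
    func makeGroupAnchoredEntity() -> Entity {
        let entity = Entity()
        entity.components.set(AnchoringComponent(
            .referenceImage(from: .init(group: "AR Resources", name: "Poster"))
        ))
        return entity
    }

    // URL-based variant from the post, which reportedly is not tracked.
    func makeURLAnchoredEntity(imageURL: URL) -> Entity {
        let entity = Entity()
        entity.components.set(AnchoringComponent(
            .referenceImage(from: .init(imageURL))
        ))
        return entity
    }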
Loading USDZ asset into Model3D causes visionOS 2.0 beta 5 to crash
We've recently discovered that our app crashes on startup on the latest visionOS 2.0 beta 5 (22N5297g) build. In fact, the entire field of view would dim down and visionOS would then restart, showing the Apple logo. Interestingly, no app crash is reported by Xcode during debug. After investigation, we have isolated the issue to a specific USDZ asset in our app. Loading it in a sample, blank project also causes visionOS to reliably crash, or become extremely unresponsive with rendering artifacts everywhere. This looks like a potentially serious issue. Even if the asset is problematic, loading it should not crash the entire OS. We have filed feedback FB14756285, along with a demo project. Hopefully someone can take a look. Thanks!
3
1
615
Aug ’24
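For context, a minimal Model3D loading sketch of the kind presumably used in the blank reproduction project; nothing at this level can catch an OS-level crash, which is why filing feedback with the offending asset is the right move. The asset name is hypothetical:

    import SwiftUI
    import RealityKit

    struct AssetView: View {
        // Hypothetical bundled asset; the post's crashing USDZ is not public.
        let assetURL = Bundle.main.url(forResource: "Scene", withExtension: "usdz")!

        var body: some View {
            Model3D(url: assetURL) { model in
                model
                    .resizable()
                    .scaledToFit()
            } placeholder: {
                // Shown while the asset loads.
                ProgressView()
            }
        }
    }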
visionOS pushWindow being dismissed on app foreground
We seem to have found an issue when using the pushWindow action on visionOS. The issue occurs if the app is backgrounded and then reopened by selecting the app's icon on the Home Screen: any window that was opened via the pushWindow action is then dismissed. We've been able to replicate the issue in a small sample project. Replication steps:
1. Open the app.
2. Open a window via the push action.
3. Press the Digital Crown.
4. On the Home Screen, select the app's icon again.
The pushed window will now be dismissed. There is a sample project linked here that shows off the issue, including a video of the bug in progress.
3
1
875
Oct ’24
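For reference, a minimal sketch of the pushWindow usage being described, assuming a hypothetical "detail" WindowGroup identifier; the report is that the pushed window does not survive the background/foreground cycle:

    import SwiftUI

    struct MainView: View {
        @Environment(\.pushWindow) private var pushWindow

        var body: some View {
            Button("Push detail window") {
                // Hides the current window and shows the "detail"
                // scene in its place (visionOS 2+).
                pushWindow(id: "detail")
            }
        }
    }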
visionOS Simulator Rotate and Scale gestures difficult to register (capture)
We were having an issue where the system rotate and scale gestures (two-handed gestures / RotateGesture3D and MagnifyGesture) were extremely difficult to register in the visionOS simulator. The solution we found was to:
1. Launch your app in the simulator.
2. Move the pointer on top of the 3D object for which you are testing rotation and scaling gestures.
3. Press and hold the Option key to display touch points (i.e. the two-handed gesture points).
4. While keeping the Option key pressed, release the pointer and re-engage it.
I am using a trackpad with tap-to-click enabled and three-finger drag enabled in Accessibility, so "release the pointer and re-engage it" translates simply to lifting the three fingers and placing them on the trackpad again. If you have kept the Option key pressed, you should now be able to rotate and scale the 3D object.
Context, if you are interested: our issue was also occurring in Apple's own gesture sample project, "Transforming RealityKit entities using gestures" (linked below). Apple's article "Interacting with your app in the visionOS simulator" (linked below) states for two-handed gestures: "Press and hold the Option key to display touch points. Move the pointer while pressing the Option key to change the distance between the touch points. Move the pointer and hold the Shift and Option keys to reposition the touch points." This simply did not work anymore for rotation and scaling gestures; these gestures used to be a lot more responsive in Sonoma. Either the article should be updated to what I described above, or there is an issue. Our colleague who is using macOS Sonoma 14.6.1 with the latest release of Xcode is not having these issues.
Here is the list of configurations (troubleshooting we tried!) where it is difficult to achieve rotation and scaling gestures in the visionOS simulator:
- macOS Sequoia 16.1 Beta, Xcode 16.1 RC with visionOS 2.1
- macOS Sequoia 16.1 Beta, Xcode 16.1 RC with visionOS 2.0
- macOS Sequoia 16.1 Beta, Xcode 16.2 Beta 1 with visionOS 2.1
- macOS Sequoia 16.1 Beta, Xcode 16.2 Beta 1 with visionOS 2.0
- macOS Sequoia 16.1 Beta, removed all Xcodes and installed the build from the App Store (Xcode 16.1)
- macOS Sequoia 16.1 Beta, Xcode 16.0 with visionOS 2.0
- completely wiped and reset the entire development machine, re-installed the latest releases of Sequoia (15.1) and Xcode (15.1)
Throughout this troubleshooting I often:
- restarted both Xcode and the simulator
- erased all derived data
- erased all contents and settings from simulators
- performed fresh git clones
None of the above worked; only the workaround described above works at the moment. As you can maybe deduce, it was very time-consuming to find the workaround, and we also wasted some development effort thinking our gesture code was no good. Hopefully this will help other devs.
Article link: https://developer.apple.com/documentation/xcode/interacting-with-your-app-in-the-visionos-simulator
Gesture sample project link: https://developer.apple.com/documentation/realitykit/transforming-realitykit-entities-with-gestures
3
0
1.1k
Oct ’24
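For reference, a minimal sketch of the gesture wiring being tested, assuming a RealityView with a single hypothetical entity; a production version would also remember the transform at gesture start rather than overwriting it on every change:

    import SwiftUI
    import RealityKit
    import Spatial

    struct GestureDemoView: View {
        var body: some View {
            RealityView { content in
                // Gestures require an input target and collision shapes.
                let cube = ModelEntity(mesh: .generateBox(size: 0.2))
                cube.components.set(InputTargetComponent())
                cube.generateCollisionShapes(recursive: true)
                content.add(cube)
            }
            .gesture(
                RotateGesture3D()
                    .targetedToAnyEntity()
                    .onChanged { value in
                        // Apply the two-handed rotation to the targeted entity.
                        let q = value.rotation.quaternion
                        value.entity.orientation = simd_quatf(
                            ix: Float(q.imag.x), iy: Float(q.imag.y),
                            iz: Float(q.imag.z), r: Float(q.real)
                        )
                    }
            )
            .simultaneousGesture(
                MagnifyGesture()
                    .targetedToAnyEntity()
                    .onChanged { value in
                        // Scale uniformly with the two-handed pinch amount.
                        value.entity.scale = .one * Float(value.magnification)
                    }
            )
        }
    }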
Digital Crown press when both immersive space and additional windows are presented
I have been experimenting with the Hello World sample app from https://developer.apple.com/documentation/visionos/world and came across behavior that appears inconsistent with the user-facing documentation describing the device controls at https://support.apple.com/en-gb/guide/apple-vision-pro/tan1e2a29e00/visionos I tried pressing the simulator's "Home" button while the "Objects in Orbit" immersive space was presented alongside the main application window. According to the user documentation, pressing the Digital Crown should take the user directly to the Home View. In my test, a single press only dismissed the immersive space; I needed another press to "exit" the app and go to the Home View. Is this behavior expected? I am assuming the "Home" button in the simulator behaves as if the user pressed the Digital Crown on the device; I don't have access to the actual hardware.
5
0
440
Feb ’25
How can I achieve this effect using SwiftUI or ShaderGraph?
What do you call the effect where the edges around a central image gradually become transparent? This effect is also seen when viewing spatial photos in immersive mode on Vision Pro. How can I achieve this effect using SwiftUI or ShaderGraph? I want to use it when displaying images in my app.
4
0
690
Feb ’25
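The effect is usually described as a feathered or vignetted edge, i.e. a radial alpha falloff. In SwiftUI, a minimal sketch is to mask the image with a radial gradient; in a Reality Composer Pro ShaderGraph the equivalent idea is driving opacity from distance to the image center. The asset name and radii here are hypothetical:

    import SwiftUI

    struct FadedEdgeImage: View {
        var body: some View {
            Image("photo")  // hypothetical asset name
                .resizable()
                .scaledToFit()
                .mask {
                    // Opaque in the center, transparent toward the edges,
                    // producing the soft falloff seen in spatial photo viewing.
                    RadialGradient(
                        colors: [.white, .white.opacity(0)],
                        center: .center,
                        startRadius: 150,
                        endRadius: 300
                    )
                }
        }
    }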
How to integrate Apple Immersive Video into the app you are developing.
Hello, let me ask a question about Apple Immersive Video. https://www.apple.com/newsroom/2024/07/new-apple-immersive-video-series-and-films-premiere-on-vision-pro/ I am currently considering implementing a feature to play Apple Immersive Video as a background scene in the app I developed, using 3DCG-created content converted into the Apple Immersive Video format. First, I would like to know whether it is possible to integrate Apple Immersive Video into an app. Could you provide information about the required software and the integration process? It would be great if you could also share any helpful website resources. I am also considering creating Apple Immersive Video content, and would like to know about the necessary equipment and software for producing both live-action footage and 3DCG animation videos. As mentioned, I'm planning to play Apple Immersive Video as a background in the app, and in doing so I would also like to place some 3D models as RealityKit entities along with spatial audio elements. I'm planning to develop the visionOS app as a Full Space Mixed experience. Is it possible to have an immersive viewing experience with Apple Immersive Video in Full Space Mixed mode? Does Apple Immersive Video support Full Space Mixed? Thank you in advance!
2
1
976
Feb ’25
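Whether third-party apps can play the proprietary Apple Immersive Video format is exactly the open question in the post, so no code can settle it here. As a hedged point of comparison, conventional immersive video backdrops are commonly built by mapping a VideoMaterial onto the inside of a large sphere in RealityKit; the file name is hypothetical:

    import RealityKit
    import AVFoundation

    // Plays ordinary (non-Apple-Immersive-Video) footage on the inside
    // of a large sphere, approximating an immersive video backdrop.
    func makeVideoBackdrop() -> ModelEntity {
        let url = Bundle.main.url(forResource: "background", withExtension: "mov")!
        let player = AVPlayer(url: url)
        let material = VideoMaterial(avPlayer: player)

        let sphere = ModelEntity(
            mesh: .generateSphere(radius: 50),
            materials: [material]
        )
        // Invert the sphere so the video is visible from inside.
        sphere.scale = SIMD3<Float>(-1, 1, 1)
        player.play()
        return sphere
    }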
Recurring World Tracking / Scene Exceeded Limit Error
Hi, we’ve been successfully using the RoomPlan API in our application for over two years. Recently, however, users have reported encountering persistent capture errors during their sessions. Specifically, the errors observed are:
- CaptureError.worldTrackingFailure
- CaptureError.exceedSceneSizeLimit
What we have observed:
- Persistent errors: the errors continue to occur even after initiating new capture sessions.
- Normal usage: our implementation adheres to typical usage patterns of the RoomPlan API without exceeding any documented room size limits.
- Limited feature usage: we are not utilizing the world-tracking feature for the StructureBuilder functionality to stitch rooms together.
- Potential state caching: given that these errors persist across sessions, we suspect that there might be memory or state cached between sessions that is not being cleared, particularly since we are not taking advantage of StructureBuilder.
Request: could you please advise whether there is any internal caching or memory retention between capture sessions that might lead to these errors? Additionally, we would appreciate guidance on how to clear or manage this state when the StructureBuilder feature is not in use.
Here is a generalised version of our capture session initialization code to help diagnose the issue:

    struct RoomARCaptureView: UIViewRepresentable {
        typealias Handler = (CapturedRoom, Error?) -> Void

        @Binding var stop: Bool
        @Binding var done: Bool
        let completion: Handler?

        func makeUIView(context: Self.Context) -> RoomCaptureView {
            let view = RoomCaptureView(frame: .zero)
            view.delegate = context.coordinator
            view.captureSession.run(configuration: .init())
            return view
        }

        func updateUIView(_ uiView: RoomCaptureView, context: Self.Context) {
            if stop {
                // Stop the session only once; stopping multiple times
                // causes issues with the final presentation.
                uiView.captureSession.stop()
                stop = false
                done = true
            }
        }

        static func dismantleUIView(_ uiView: RoomCaptureView, coordinator: Self.Coordinator) {
            uiView.captureSession.stop()
        }

        func makeCoordinator() -> ARViewCoordinator {
            ARViewCoordinator(completion)
        }

        @objc(ARViewCoordinator)
        class ARViewCoordinator: NSObject, RoomCaptureViewDelegate {
            var completion: Handler?

            public required init?(coder: NSCoder) {}
            public func encode(with coder: NSCoder) {}

            public init(_ completion: Handler?) {
                super.init()
                self.completion = completion
            }

            public func captureView(shouldPresent roomDataForProcessing: CapturedRoomData, error: (Error)?) -> Bool {
                return true
            }

            public func captureView(didPresent processedResult: CapturedRoom, error: (Error)?) {
                completion?(processedResult, error)
            }
        }
    }

Thank you for your assistance.
6
3
882
Mar ’25
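One hedged thing worth checking, assuming iOS 17+: RoomCaptureSession.stop(pauseARSession:) controls whether the underlying ARSession keeps its world-tracking state alive between scans. If stop(pauseARSession: false) is used anywhere (it is intended for multi-room StructureBuilder flows), accumulated tracking state could plausibly contribute to exceedSceneSizeLimit. For independent scans, pause the session on stop:

    import RoomPlan

    func endScan(_ view: RoomCaptureView) {
        // Pausing the ARSession discards accumulated world-tracking state,
        // which is the desired behavior when StructureBuilder is not used.
        view.captureSession.stop(pauseARSession: true)  // iOS 17+
    }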
Metal (Compositor Services) or RealityKit on visionOS
I am developing a visionOS app. I am very interested in Metal and Compositor Services, but I have not explored them in depth. I know that Metal offers a greater degree of control. I am wondering whether using Compositor Services gives you fewer AR capabilities than RealityKit (such as scene reconstruction and understanding, hover effects, etc.).
4
0
297
Mar ’25
How to export an entity to USD?
I want to record an animation with an entity and then export it to .usd without using Reality Composer Pro. How can I achieve that?
1
0
79
Apr ’25
Vision Pro camera frame rate
Hi, I'm working with CameraFrameProvider from the Enterprise API. Is it always capped at 30 fps, or is there something I can switch to get more? I assume it is capped at 30, so let me cram in an additional question here :) If I get a developer strap and attach an external camera capable of more than 30 fps, will I get the full stream, or will some other limitation kick in?
2
0
140
Apr ’25
Depth matrix accuracy with the iPhone 14 Pro and LiDAR
Hello Community, I’m currently working with the sample code “CapturingDepthUsingTheLiDARCamera” and using it to capture the depth map of an image taken with the iPhone 14 Pro. From this depth map, I generate a point cloud using the intrinsic camera parameters. I've noticed that objects not facing the camera directly appear distorted in the resulting point cloud. For example: An object with surfaces that are perpendicular to each other appears with a sharper angle in the point cloud — around 60° instead of 90°. My question is: Is this due to the general accuracy limitations of the LiDAR sensor? Or could it be related to the sample code? To obtain the depth map, I’m using: AVCapturePhoto.depthData.converting(toDepthDataType: kCVPixelFormatType_DepthFloat32) Thanks in advance for your help!
0
0
141
Apr ’25
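A hedged note on the distortion: an angle error of that size is often a sign that the unprojection uses intrinsics at the wrong resolution (AVCameraCalibrationData's intrinsicMatrix is given at its reference dimensions and must be scaled to the depth map's resolution), or that lens distortion is uncorrected, rather than a raw LiDAR accuracy limit. A minimal pinhole unprojection sketch for one pixel, assuming metric depth and intrinsics already scaled to the depth map size:

    import simd

    // Standard pinhole unprojection: pixel (u, v) with depth z
    // maps to a 3D point in the camera frame.
    func unproject(u: Float, v: Float, depth: Float,
                   fx: Float, fy: Float, cx: Float, cy: Float) -> SIMD3<Float> {
        let x = (u - cx) * depth / fx
        let y = (v - cy) * depth / fy
        return SIMD3<Float>(x, y, depth)
    }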
Summon gesture
Can you help me write code that can pick up an element a bit far from me, bring it near to me, let me flick it a bit, and then send it back to its original position when I release it? Thanks a lot, Christophe
1
0
78
Apr ’25
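A minimal sketch of the summon interaction under stated assumptions: a RealityView with one hypothetical entity, a drag that pulls it toward the hand, and a release that animates it back home. The flick physics is omitted, and the coordinate conversion follows the pattern used in community and sample entity-gesture code:

    import SwiftUI
    import RealityKit

    struct SummonView: View {
        @State private var homePosition: SIMD3<Float>? = nil

        var body: some View {
            RealityView { content in
                let object = ModelEntity(mesh: .generateSphere(radius: 0.1))
                object.position = [0, 1.2, -2]  // starts out of reach
                object.components.set(InputTargetComponent())
                object.generateCollisionShapes(recursive: true)
                content.add(object)
            }
            .gesture(
                DragGesture()
                    .targetedToAnyEntity()
                    .onChanged { value in
                        let entity = value.entity
                        if homePosition == nil {
                            homePosition = entity.position  // remember origin
                        }
                        // Follow the hand while dragging.
                        entity.position = value.convert(
                            value.location3D, from: .local, to: entity.parent!
                        )
                    }
                    .onEnded { value in
                        // Send the entity back to its original position.
                        if let home = homePosition {
                            value.entity.move(
                                to: Transform(translation: home),
                                relativeTo: value.entity.parent,
                                duration: 0.5
                            )
                        }
                        homePosition = nil
                    }
            )
        }
    }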