Discuss spatial computing on Apple platforms and how to design and build an entirely new universe of apps and games for Apple Vision Pro.

Posts under Spatial Computing topic

Post / Replies / Boosts / Views / Activity

How to get multiple animations into USDZ
Most models are only available as glb or fbx, so I usually re-export them to usdz using Blender. When I import them into Reality Composer Pro, the mesh, textures, etc. look great, but in the Animation Library subsection all I can see is one default subtree animation. In Blender I can see all available animations and play them individually. The default subtree animation just plays the default idle animation. In fact, when I open the nonlinear animation view in Blender and select a different animation as the default, the exported usdz shows the newly selected animation as the default subtree animation. I can see in the Apple sample apps that models can have multiple animations in their Animation Library. I'm using the latest Blender 4.5, so the usdz exporter should be working properly, shouldn't it?
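A hedged sketch (not a fix): after loading the exported USDZ in RealityKit, availableAnimations reports how many animation tracks actually made it through the Blender export, which helps separate an export problem from a Reality Composer Pro display problem. "model" is a placeholder resource name.

import RealityKit

func inspectAnimations() async {
    // Placeholder resource name; assumes the USDZ is bundled with the app.
    guard let entity = try? await Entity(named: "model") else { return }
    print("RealityKit sees \(entity.availableAnimations.count) animation(s)")
    // Play a specific track rather than the default subtree animation.
    if let first = entity.availableAnimations.first {
        entity.playAnimation(first.repeat())
    }
}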
3
1
809
Oct ’25
Scene not found after changes in Reality Composer Pro
Randomly, the app stops working after small changes in Reality Composer Pro, such as scaling an object a tiny bit. To fix the error, I have to change another element in Reality Composer Pro and hope for the best. If that doesn't help, I change (transform) something else, or deactivate/activate something, to get the project working again. I can't see a pattern for why the Reality Composer Pro project sometimes gets into a state where it no longer compiles.
3
0
2.0k
Jan ’26
Is ARGeoTrackingConfiguration always more accurate than ARWorldTrackingConfiguration for world scale AR?
We are working on a world-scale AR app that leverages the device location and heading to place objects in the streets, so that they are correctly and stably anchored to certain locations. Since the geo-tracking imagery is only available in certain cities and areas, we are trying to figure out how to fall back when geo-tracking is not available as the device moves away, while still retaining good AR camera accuracy. We might need to come up with some algorithm using the device GPS to line up the ARCamera with our objects. Question: Does geo-tracking always provide accuracy greater than or equal to world tracking for a GPS outdoor AR experience? If so, we can simply use ARGeoTrackingConfiguration the entire time and rely on the ARView keeping itself aligned. Otherwise, we need to switch between it and ARWorldTrackingConfiguration when geo-tracking is not available and/or its accuracy is low, then roll our own algorithm to keep the camera aligned. Thanks.
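A minimal sketch of the fallback the post describes, assuming a plain ARSession: prefer geo tracking where coverage exists, otherwise run world tracking. The function name is made up; the availability checks are the documented ARKit calls.

import ARKit

func startBestTrackingConfiguration(for session: ARSession) {
    guard ARGeoTrackingConfiguration.isSupported else {
        session.run(ARWorldTrackingConfiguration())
        return
    }
    // Checks whether geo-tracking imagery is available at the current location.
    ARGeoTrackingConfiguration.checkAvailability { available, error in
        DispatchQueue.main.async {
            if available {
                session.run(ARGeoTrackingConfiguration())
            } else {
                // No geo coverage here (or an error): fall back to world tracking
                // and align content with GPS/heading, as the post suggests.
                session.run(ARWorldTrackingConfiguration())
            }
        }
    }
}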
3
0
1.1k
Oct ’25
Cannot extract imagePair from generated Spatial Photos
Hi, I am trying to implement something simple: letting people share their Spatial Photos with others (just like this post). I encountered the same issue as that poster, but the answer there doesn't help me out here. Briefly speaking, I am using CGImageSource to extract a paired leftImage and rightImage from one fetched spatial photo:

let photos = PHAsset.fetchAssets(with: .image, options: nil)
// enumerating photos ....
if asset.mediaSubtypes.contains(PHAssetMediaSubtype.spatialMedia) {
    spatialAsset = asset
}
// other code shown below

I can fetch left and right images from a native Spatial Photo (taken by Apple Vision Pro or iPhone 15+), but it didn't work on a generated spatial photo (the 2D -> 3D feature in Photos):

// imageCount is 1 when it comes to a generated spatial photo
let imageCount = CGImageSourceGetCount(source)

I searched the net and someone says the generated version has a depth image instead of a left/right pair, but I still cannot extract any depth image from the imageSource. The full code is below; the image-pair extraction stops at "no groups found":

func extractPairedImage(phAsset: PHAsset, completion: @escaping (StereoImagePair?) -> Void) {
    let options = PHImageRequestOptions()
    options.isNetworkAccessAllowed = true
    options.deliveryMode = .highQualityFormat
    options.resizeMode = .none
    options.version = .original
    return PHImageManager.default().requestImageDataAndOrientation(for: phAsset, options: options) { imageData, _, _, _ in
        guard let imageData,
              let imageSource = CGImageSourceCreateWithData(imageData as CFData, nil) else {
            completion(nil)
            return
        }
        let stereoImagePair = stereoImagePair(from: imageSource)
        completion(stereoImagePair)
    }
}

func stereoImagePair(from source: CGImageSource) -> StereoImagePair? {
    guard let properties = CGImageSourceCopyProperties(source, nil) as? [CFString: Any] else {
        return nil
    }
    let imageCount = CGImageSourceGetCount(source)
    print(String(format: "%d images found", imageCount))
    guard let groups = properties[kCGImagePropertyGroups] as? [[CFString: Any]] else {
        // function returns here
        print("no groups found")
        return nil
    }
    guard let stereoGroup = groups.first(where: {
        let groupType = $0[kCGImagePropertyGroupType] as! CFString
        return groupType == kCGImagePropertyGroupTypeStereoPair
    }) else {
        return nil
    }
    guard let leftIndex = stereoGroup[kCGImagePropertyGroupImageIndexLeft] as? Int,
          let rightIndex = stereoGroup[kCGImagePropertyGroupImageIndexRight] as? Int,
          let leftImage = CGImageSourceCreateImageAtIndex(source, leftIndex, nil),
          let rightImage = CGImageSourceCreateImageAtIndex(source, rightIndex, nil),
          let leftProperties = CGImageSourceCopyPropertiesAtIndex(source, leftIndex, nil),
          let rightProperties = CGImageSourceCopyPropertiesAtIndex(source, rightIndex, nil)
    else {
        return nil
    }
    return (leftImage, rightImage, self.identifier)
}

Any suggestions? Thanks. visionOS 2.4
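A hedged sketch for the "generated spatial photo only contains one image" case: probe that single image for auxiliary depth/disparity data instead of a stereo group. Whether converted photos actually carry this payload is exactly the open question in the post; the ImageIO calls themselves are standard.

import ImageIO

func auxiliaryDepthInfo(from source: CGImageSource) -> CFDictionary? {
    let types: [CFString] = [
        kCGImageAuxiliaryDataTypeDisparity,
        kCGImageAuxiliaryDataTypeDepth
    ]
    for type in types {
        // Returns the depth map data, its description, and metadata if present.
        if let info = CGImageSourceCopyAuxiliaryDataInfoAtIndex(source, 0, type) {
            return info
        }
    }
    return nil
}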
3
0
225
Jun ’25
Unable to Create a Fully Immersive Experience That Hides Other Windows in visionOS App
Description: I'm developing a travel/panorama viewing app for visionOS that allows users to view 360° panoramic images in an immersive space. When users enter panorama viewing mode, I want to provide a fully immersive experience where the main interface window and Earth 3D globe window are hidden. I've implemented the app following Apple's documentation on Creating Fully Immersive Experiences, but when users enter the immersive space, both the main window and the Earth 3D window remain visible, diminishing the immersive experience.

Implementation Details: My app has three main components:
- A main content window showing panorama thumbnails
- A 3D globe window (volumetric) showing locations
- An immersive space for viewing 360° panoramas

I'm using .immersionStyle(selection: $panoImageView, in: .full) to create a fully immersive experience, but other windows remain visible.

Relevant Code:

@main
struct Travel_ImmersiveApp: App {
    @StateObject private var appModel = AppModel()
    @State private var panoImageView: ImmersionStyle = .full

    var body: some Scene {
        WindowGroup {
            ContentView()
                .environmentObject(appModel)
        }
        .windowStyle(.automatic)
        .defaultSize(width: 1280, height: 825)

        WindowGroup(id: "Earth") {
            Globe3DView()
                .environmentObject(appModel)
                .onAppear {
                    appModel.isGlobeWindowOpen = true
                    appModel.globeWindowOpen = true
                }
                .onDisappear {
                    if !appModel.shouldCloseApp {
                        appModel.handleGlobeWindowClose()
                    }
                }
        }
        .windowStyle(.volumetric)
        .defaultSize(width: 0.8, height: 0.8, depth: 0.8, in: .meters)
        .windowResizability(.contentSize)

        ImmersiveSpace(id: "ImmersiveView") {
            ImmersiveView()
                .environmentObject(appModel)
        }
        .immersionStyle(selection: $panoImageView, in: .full)
    }
}

Opening the Immersive Space:

func getPanoImageAndOpenImmersiveSpace() async {
    appModel.clearMemoryCache()
    do {
        let canView = appModel.canViewImage(image)
        if canView {
            let downloadedImage = try await appModel.getPanoramaImage(for: image) { progress in
                Task { @MainActor in
                    cardState = .loading(progress: progress)
                }
            }
            await MainActor.run {
                appModel.updateCurrentImage(image, panoramaImage: downloadedImage)
            }
            if !appModel.immersiveSpaceOpened {
                try await openImmersiveSpace(id: "ImmersiveView")
                await MainActor.run {
                    appModel.immersiveSpaceOpened = true
                    cardState = .normal
                }
            } else {
                await MainActor.run {
                    appModel.updateImmersiveView = true
                    cardState = .normal
                }
            }
        } else {
            await MainActor.run {
                appModel.errorMessage = "You do not have permission to view this image."
                cardState = .normal
            }
        }
    } catch {
        // Error handling
    }
}

Immersive View Implementation:

struct ImmersiveView: View {
    @EnvironmentObject var appModel: AppModel

    var body: some View {
        RealityView { content in
            let rootEntity = Entity()
            content.add(rootEntity)
            Task {
                if let selectedImage = appModel.selectedImage,
                   appModel.canViewImage(selectedImage) {
                    await loadPanorama(for: rootEntity)
                }
            }
        } update: { content in
            if appModel.updateImmersiveView,
               let selectedImage = appModel.selectedImage,
               appModel.canViewImage(selectedImage),
               let rootEntity = content.entities.first {
                Task {
                    await loadPanorama(for: rootEntity)
                    appModel.updateImmersiveView = false
                }
            }
        }
        .onAppear {
            print("ImmersiveView appeared")
        }
        .onDisappear {
            appModel.resetImmersiveState()
        }
    }

    // loadPanorama implementation...
}

What I've Tried
- Set immersionStyle to .full as recommended in the documentation
- Confirmed that the immersive space is properly opened and displaying panoramas
- Verified that the state management for the immersive space is working correctly

Questions
1. How can I ensure that when the user enters the immersive panorama viewing experience, all other windows (main interface and Earth 3D globe) are automatically hidden?
2. Is there a specific API or approach I'm missing to properly implement a fully immersive experience that hides all other windows?
3. Do I need to manually dismiss the windows when opening the immersive space, and if so, what's the best approach for doing this?

Any guidance or sample code would be greatly appreciated. Thank you!
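A hedged sketch of the approach question 3 raises: dismiss the auxiliary windows yourself right after the immersive space opens, using SwiftUI's dismissWindow environment action. The "Earth" id comes from the code above; "MainWindow" is hypothetical, since the main WindowGroup in the original code has no id and would need one to be dismissed this way.

struct PanoramaLauncher: View {
    @Environment(\.openImmersiveSpace) private var openImmersiveSpace
    @Environment(\.dismissWindow) private var dismissWindow

    var body: some View {
        Button("View panorama") {
            Task {
                // Open the immersive space first, then hide the other windows.
                if await openImmersiveSpace(id: "ImmersiveView") == .opened {
                    dismissWindow(id: "Earth")       // the volumetric globe window
                    dismissWindow(id: "MainWindow")  // hypothetical id for the main window
                }
            }
        }
    }
}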
3
0
246
Apr ’25
What is the first reliable position of the Apple Vision Pro device?
In several visionOS apps, we readjust our scenes to the user's eye level (their heads). But we have encountered issues whereby the WorldTrackingProvider returns bad/incorrect positions for the first x number of frames. See the code below, which you can copy-paste into any Immersive Space. Relaunch the space and observe that the numberOfBadWorldInfos value is inconsistent.
a. What is the most reliable way to get the device's position?
b. Is this indeed a bug?
c. Are we using worldInfo improperly?
d. As a workaround, in our apps we let 10 frames pass before using worldInfo; should we set our threshold differently?

import ARKit
import Combine
import OSLog
import SwiftUI
import RealityKit
import RealityKitContent

let SUBSYSTEM = Bundle.main.bundleIdentifier!

struct ImmersiveView: View {
    let logger = Logger(subsystem: SUBSYSTEM, category: "ImmersiveView")
    let session = ARKitSession()
    let worldInfo = WorldTrackingProvider()

    @State var sceneUpdateSubscription: EventSubscription? = nil
    @State var deviceTransform: simd_float4x4? = nil
    @State var numberOfBadWorldInfos = 0
    @State var isBadWorldInfoLoged = false

    var body: some View {
        RealityView { content in
            try? await session.run([worldInfo])
            sceneUpdateSubscription = content.subscribe(to: SceneEvents.Update.self) { event in
                guard let pose = worldInfo.queryDeviceAnchor(atTimestamp: CACurrentMediaTime()) else {
                    return
                }
                // `worldInfo` does not return correct values for the first few frames (exact number of frames is unknown)
                // - known SO: https://stackoverflow.com/questions/78396187/how-to-determine-the-first-reliable-position-of-the-apple-vision-pro-device
                deviceTransform = pose.originFromAnchorTransform
                if deviceTransform!.columns.3.y < 1.6 {
                    numberOfBadWorldInfos += 1
                    logger.warning("\(#function) \(#line) deviceTransform.columns.3.y \(deviceTransform!.columns.3.y), numberOfBadWorldInfos \(numberOfBadWorldInfos)")
                } else {
                    if !isBadWorldInfoLoged {
                        logger.info("\(#function) \(#line) deviceTransform.columns.3.y \(deviceTransform!.columns.3.y), numberOfBadWorldInfos \(numberOfBadWorldInfos)")
                    }
                    isBadWorldInfoLoged = true // stop logging.
                }
            }
        }
    }
}
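A hedged workaround sketch, not an official answer: instead of counting frames, wait until the provider reports .running and a device anchor is actually returned before trusting the transform. The 250 ms settle delay is an arbitrary choice, not an Apple-documented value.

import ARKit
import QuartzCore
import simd

func firstReliableDeviceTransform(_ provider: WorldTrackingProvider) async -> simd_float4x4? {
    // Wait for the data provider to reach the running state.
    while provider.state != .running {
        try? await Task.sleep(for: .milliseconds(50))
    }
    // Give tracking a brief, arbitrary moment to settle after starting.
    try? await Task.sleep(for: .milliseconds(250))
    return provider.queryDeviceAnchor(atTimestamp: CACurrentMediaTime())?
        .originFromAnchorTransform
}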
3
0
154
May ’25
Human pose detection failing on Vision Pro
Hi! I attempted to run a sample project for detecting human pose in photos, which can be found here: https://developer.apple.com/documentation/vision/detecting-human-body-poses-in-3d-with-vision The project works perfectly when run on my MacBook Pro M1, but it fails on Apple Vision Pro. After selecting the photo, an endless loading screen is presented and the following output is produced in the console:

Failed to initialize 2D Detection Algorithm.
Failed to initialize 2D Pose Estimation Algorithm.
Failed to initialize algorithm modules
Network path is nil: (null)
Failed to initialize 2D Detection Algorithm.
Failed to initialize 2D Pose Estimation Algorithm.
Failed to initialize algorithm modules
Unable to perform the request: Error Domain=com.apple.Vision Code=9 "Async status object reported as failed but without an error" UserInfo={NSLocalizedDescription=Async status object reported as failed but without an error}.
de-activating session 70138 after timeout

It seems that VNDetectHumanBodyPose3DRequest is failing on Vision Pro for some reason. Are there any additional requirements for running the Vision framework on visionOS that I might be missing?
3
1
207
Mar ’25
Human body joint tracking in visionOS
The goal is to achieve precise joint tracking for clinical assessment. The doctor is wearing the AVP and observing the patient's movement. Do you have any recommended best practices for integrating real-time joint tracking and displaying it on the patient within visionOS? We attempted to use VNHumanBodyPose3DObservation, which theoretically should work, but we are unable to display the detected joints in an Immersive Space for real-time validation. This makes it difficult for the doctor to ensure accurate tracking, and if possible a photo or video of the range-of-motion assessment would be needed for the patient record. Are there alternative methods to achieve precise real-time joint tracking without requiring main camera access (com.apple.developer.arkit.main-camera-access.allow)?
3
0
352
Mar ’25
Exporting .reality files from Reality Composer Pro
I've been using the macOS Xcode Reality Composer to export interactive .reality files that can be hosted on the web and linked to, triggering Quick Look to open the interactive AR experience. That works really well. I've just downloaded the Xcode 15 beta, which ships with the new Reality Composer Pro, and I can't see a way to export to .reality files anymore. It seems this is only for building apps that ship as native iOS etc. apps, rather than experiences that can be viewed in Quick Look. Am I missing something, or is it no longer possible to export .reality files? Thanks.
3
2
2.1k
Jul ’25
visionOS pushWindow being dismissed on app foreground
We seem to have found an issue when using the pushWindow action on visionOS. The issue occurs if the app is backgrounded and then reopened by selecting the app's icon on the home screen. Any window that was opened via the pushWindow action is then dismissed. We've been able to replicate the issue in a small sample project.

Replication steps:
1. Open the app
2. Open a window via the push action
3. Press the Digital Crown
4. On the home screen, select the app's icon again

The pushed window will now be dismissed. There is a sample project linked here that shows off the issue, including a video of the bug in progress.
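A hedged minimal sketch of the flow described above (the window id is made up): a view in the main window pushes a second window, which is the one reported as dismissed after backgrounding and relaunching from the app icon.

import SwiftUI

struct PushWindowDemo: View {
    @Environment(\.pushWindow) private var pushWindow

    var body: some View {
        Button("Push detail window") {
            // Replaces the presenting window with the "Detail" window group.
            pushWindow(id: "Detail")
        }
    }
}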
3
1
859
Jan ’26
Creating a voxel mesh and rendering it using Metal within a RealityKit ImmersiveView
Hi everyone, I'm creating an educational app that allows doing computational design in an immersive environment with the Vision Pro. The app is free and can be found here: https://apps.apple.com/us/app/arcade-topology/id6742103633 The problem I have is that the mesh of voxels I currently create uses ModelEntity, and I recently read that this is horrible for scalability. I already start to see issues when I try to use thousands of voxels. I also read somewhere that I should take advantage of the GPU and use Metal to that end. I was wondering if someone could point me to a tutorial or article that discusses this. In essence, I need to create a 3D voxel mesh, and those voxels have to update their opacity within an iterative loop. Thanks! —Alejandro
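A hedged sketch (not from the post) of the usual first step before dropping down to Metal: batch all voxels into a single MeshResource via MeshDescriptor so the scene holds one ModelEntity instead of thousands. The function name and parameters are made up.

import RealityKit
import simd

func makeVoxelEntity(voxelCenters: [SIMD3<Float>], edge: Float) throws -> ModelEntity {
    // 8 corners of a cube with the given edge length.
    let corners: [SIMD3<Float>] = [
        [0, 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0],
        [0, 0, 1], [1, 0, 1], [1, 1, 1], [0, 1, 1]
    ].map { $0 * edge }
    // 12 triangles (two per face) indexing those 8 corners.
    let cubeIndices: [UInt32] = [
        0, 2, 1, 0, 3, 2,   4, 5, 6, 4, 6, 7,
        0, 1, 5, 0, 5, 4,   3, 7, 6, 3, 6, 2,
        0, 4, 7, 0, 7, 3,   1, 2, 6, 1, 6, 5
    ]

    var positions: [SIMD3<Float>] = []
    var indices: [UInt32] = []
    positions.reserveCapacity(voxelCenters.count * 8)
    indices.reserveCapacity(voxelCenters.count * 36)

    for (i, center) in voxelCenters.enumerated() {
        let base = UInt32(i * 8)
        positions.append(contentsOf: corners.map { center + $0 - SIMD3(repeating: edge / 2) })
        indices.append(contentsOf: cubeIndices.map { base + $0 })
    }

    var descriptor = MeshDescriptor(name: "voxels")
    descriptor.positions = MeshBuffer(positions)
    descriptor.primitives = .triangles(indices)

    let mesh = try MeshResource.generate(from: [descriptor])
    let material = SimpleMaterial(color: .white, isMetallic: false)
    return ModelEntity(mesh: mesh, materials: [material])
}

Per-voxel opacity in an iterative loop would still need regenerating the mesh, vertex data driven through a custom or ShaderGraph material, or RealityKit's lower-level mesh/Metal path, which is the direction the post is asking about.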
3
0
150
Jun ’25
When placing a TextField within a RealityViewAttachment, the virtual keyboard does not appear in front of the user as expected.
Hello, Thank you for your time. I have a question regarding visionOS app development. When placing a SwiftUI TextField inside RealityView.attachments, we found that focusing on the field does not bring up the virtual keyboard in front of the user. Instead, the keyboard appears around the user’s lower abdomen area. However, when placing the same TextField in a regular SwiftUI layer outside of RealityView, the keyboard appears in the correct position as expected. This suggests that the issue is specific to RealityView.attachments. We are currently exploring ways to have the virtual keyboard appear directly in front of the user when using TextField inside RealityViewAttachments. If there is any method to explicitly control the keyboard position or any known workarounds—including alternative UI approaches—we would greatly appreciate your guidance. Best regards, Sadao Tokuyama
3
1
647
Jul ’25
RealityKit Entity ComponentSet does not conform to Sequence?
Hello, I'm trying to view the components of an Entity I'm creating in RealityKit by reading from a USDZ file. I have the following code snippet in my app:

if let appleEntity = try? Entity.loadModel(named: "apple_tile") {
    let c = appleEntity.components
    for comp in c { // <- compiler error here
        print(comp)
    }
}

The compiler error I'm receiving says "For-in loop requires 'Entity.ComponentSet' to conform to 'Sequence'". However, I thought this was the case, according to the documentation for Entity.ComponentSet? Curious if anyone else has had this problem. Running Xcode 15.4, and my Swift version is:

xcrun swift -version
swift-driver version: 1.90.11.1 Apple Swift version 5.10 (swiftlang-5.10.0.13 clang-1500.3.9.4)
Target: x86_64-apple-macosx14.0
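A hedged workaround sketch, given the compiler error: Entity.ComponentSet supports subscripting by component type and reports a count, so specific components can be probed directly even if for-in iteration isn't available in this SDK version. The component types probed here are just examples.

import RealityKit

func describeKnownComponents(of entity: Entity) {
    // Probe a few concrete component types by subscript.
    if let model = entity.components[ModelComponent.self] {
        print("ModelComponent:", model)
    }
    if let transform = entity.components[Transform.self] {
        print("Transform:", transform)
    }
    print("Total component count:", entity.components.count)
}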
3
0
425
Mar ’25
Volumetric window not sharing in SharePlay session for visionOS
I've been struggling with this for far too long so I've decided to finally come here and see if anyone can point me to the documentation that I'm missing. I'm sure it's something so simple but I just can't figure it out. I can SharePlay our test app with my brother (device to device) but when I open a volumetric window, it says "not shared" under it. I assume this will likely fix the video sharing problem we have as well. Everything else works so smooth but SharePlay has just been such a struggle for me. It's the last piece to the puzzle before we can put it on the App Store.
2
0
140
Sep ’25
Apply mesh to real world people.
As far as I know, Apple hasn’t opened access to the Vision Pro camera for developers yet, so I’m trying to find possible workarounds within the current capabilities. I’m wondering if there’s any way to apply a mesh to a person in the scene in Vision Pro, or if there’s an alternative approach to roughly detect a human shape in front of the user?
2
0
116
Apr ’25
Spatial Gallery App functionality
Similar to the visionOS Spatial Gallery app, I'm developing a visionOS app that will show spatial photos and videos. Is it possible to re-create the horizontal (or a vertical) scrolling functionality that shows spatial photos and spatial video previews? Does the Spatial Gallery app use private APIs to create this functionality? I've been looking at the Quick Look documentation and have been able to use the PreviewApplication to show a single preview, but do not see anything for a collection of files as the Spatial Gallery app presents in the scrolling view. Any insights or direction on how this may be done is greatly appreciated.
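A hedged sketch of the direction the post mentions, assuming the PreviewApplication.open(urls:selectedURL:) entry point (the post already uses PreviewApplication for a single item): hand Quick Look several spatial photos at once. Whether this reproduces Spatial Gallery's scrolling strip is exactly the open question; the file URLs are placeholders.

import SwiftUI
import QuickLook

struct GalleryPreviewButton: View {
    let photoURLs: [URL]   // placeholder: local spatial photo files

    var body: some View {
        Button("Preview spatial photos") {
            // Opens a Quick Look preview session over the whole collection.
            _ = PreviewApplication.open(urls: photoURLs, selectedURL: photoURLs.first)
        }
    }
}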
2
0
198
Jun ’25
.glassEffect() view modifier in visionOS 26 beta 1 generates an error
.glassEffect(.regular, in: .rect(cornerRadius: 24)) produces the error: 'glassEffect(_:in:isEnabled:)' is unavailable in visionOS. This is not surprising, since visionOS already has a native glass interface that served as a model for the other OSes, but this error creates additional overhead for developers building multi-platform apps that include visionOS.
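A hedged sketch of one way to paper over the per-platform difference in a multi-platform target: keep .glassEffect on the platforms that have it and fall back to visionOS's existing glassBackgroundEffect. The helper name is invented.

import SwiftUI

extension View {
    @ViewBuilder
    func adaptiveGlass(cornerRadius: CGFloat = 24) -> some View {
        #if os(visionOS)
        // visionOS: use the existing glass background modifier.
        self.glassBackgroundEffect(in: .rect(cornerRadius: cornerRadius))
        #else
        // iOS/macOS 26: use the new Liquid Glass modifier from the post.
        self.glassEffect(.regular, in: .rect(cornerRadius: cornerRadius))
        #endif
    }
}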
2
0
175
Jun ’25
RealityKit asset loading
With Xcode 26, loading resources with RealityKit is extremely slow. Here my project takes almost 50 seconds to load. I also get multiple "Hang detected" messages in the console. When I uncheck "Debug executable" in the scheme, the same project loads in 2 seconds. I'm using RealityKit asynchronous loading:

private static func loadFromRealityComposerPro(
    named entityName: String,
    fromSceneNamed sceneName: String
) async -> Entity? {
    var entity: Entity?
    do {
        let scene = try await Entity(
            named: sceneName,
            in: visionPetsContentBundle
        )
        entity = scene.findEntity(named: entityName)
    } catch {
        print(
            "Error loading \(entityName) from scene \(sceneName): \(error.localizedDescription)"
        )
    }
    return entity
}

Anyone having the same problem?
2
0
93
Jun ’25