Integrate iOS device camera and motion features to produce augmented reality experiences in your app or game using ARKit.

ARKit Documentation

Posts under ARKit subtopic


How to cast shadow on OcclusionMaterial in visionOS
I have a ModelEntity with a GroundingShadowComponent set on its hierarchy:

    entity.enumerateHierarchy { child, stop in
        child.components.set(GroundingShadowComponent(castsShadow: true))
    }

When I set it on the table, I can see the shadow on the table, even with plane detection disabled. However, when I enable plane detection and give the detected plane an OcclusionMaterial, I can no longer see the shadow on the table. As far as I know, receivesDynamicLighting is not usable in visionOS. So how can I cast a shadow on OcclusionMaterial in visionOS? Put differently: is it possible to have the shadow display properly on the tabletop while still ensuring that objects beneath the table stay hidden?
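Not an authoritative fix, but one workaround worth trying: sink the OcclusionMaterial plane a few millimeters below the detected surface, so it still hides geometry under the table without sitting exactly where the grounding shadow is rendered. A minimal sketch, assuming extent comes from the plane anchor's geometry:

    // Hypothetical workaround: sink the occlusion plane slightly below the
    // detected surface so the grounding shadow can still land on the tabletop.
    // `extent` stands in for the plane anchor's geometry extent.
    let occlusionPlane = ModelEntity(
        mesh: .generatePlane(width: extent.width, depth: extent.height),
        materials: [OcclusionMaterial()]
    )
    occlusionPlane.position.y = -0.005 // 5 mm below the anchor; tune as needed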
Replies: 1 · Boosts: 0 · Views: 528 · Activity: Jan ’26
ARKit Face Tracking works in total darkness?
I’ve seen, mainly in discussions with AIs, claims that ARFaceTrackingConfiguration uses the same technology as Face ID and therefore should work in complete darkness. However, I haven’t been able to reproduce this. Does anyone know whether it’s actually true? I’m testing on an iPhone 16, and Face ID itself works well in darkness.
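One way to test this empirically is to run the configuration and watch whether the face anchor stays tracked with the lights off. A minimal probe, using only standard ARKit APIs:

    import ARKit

    final class FaceTrackingProbe: NSObject, ARSessionDelegate {
        let session = ARSession()

        func start() {
            guard ARFaceTrackingConfiguration.isSupported else { return }
            session.delegate = self
            session.run(ARFaceTrackingConfiguration())
        }

        // If isTracked stays true with the lights off, tracking is not
        // relying on the visible-light camera alone.
        func session(_ session: ARSession, didUpdate anchors: [ARAnchor]) {
            for case let face as ARFaceAnchor in anchors {
                print("face tracked:", face.isTracked)
            }
        }
    }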
Replies: 0 · Boosts: 0 · Views: 97 · Activity: 3w
Crash When Displaying RealityView on Multiple Screens, Only When Connected to Xcode
I have an iOS app that uses RealityView to display some models and interact with them, with regular iOS navigation. The challenge I'm facing is how to maintain multiple RealityViews across multiple screens. For example, Screen A has a RealityView, and then I navigate to Screen B (which also has a RealityView) using stack-based navigation. When I do so, I get a crash:

    -[MTLDebugRenderCommandEncoder validateCommonDrawErrors:]:5970: failed assertion
    `Draw Errors Validation
    Fragment Function(fsRealityPbr): argument envProbeTable[0] from Buffer(7) with
    offset(0) and length(16) has space for 16 bytes, but argument has a length(864).
    Fragment Function(fsRealityPbr): incorrect type of texture (MTLTextureType2D) bound
    at Texture binding at index 20 (expect MTLTextureTypeCubeArray) for envProbeDiffuseArray[0].

Interestingly, this crash only happens when debugging with Xcode, not when the app runs on its own (the MTLDebugRenderCommandEncoder in the assertion suggests it comes from Metal API validation, which is only enabled when running under the debugger). I'm not sure whether what I'm doing is an anti-pattern or whether it's an Xcode debugging limitation.
Replies: 0 · Boosts: 0 · Views: 713 · Activity: 1w
visionOS 2.0 Main Camera Access Enterprise Entitlement Not Recognized in Xcode
I am working on a project that requires access to the main camera on the Vision Pro. My main account holder applied for the necessary enterprise entitlement; we were approved and received the Enterprise.license file by email. I have added the Enterprise.license file to my project, and manually added the com.apple.developer.arkit.main-camera-access.allow entitlement to the entitlements file and set it to true, since it was not available in the list when I tried to use the + Capability button in the Signing & Capabilities tab. I am getting an error: Provisioning profile "iOS Team Provisioning Profile: " doesn't include the com.apple.developer.arkit.main-camera-access.allow entitlement. I have checked the provisioning profile settings online: there is no manual option for adding the main-camera-access entitlement, and the profile does not seem to be picking up the approval from the license.
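For reference, the manually added entitlements entry should look roughly like this (a sketch; the key name is taken from the post, the rest is standard plist boilerplate):

    <?xml version="1.0" encoding="UTF-8"?>
    <!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
    <plist version="1.0">
    <dict>
        <!-- Enterprise entitlement; must also be present in the provisioning profile -->
        <key>com.apple.developer.arkit.main-camera-access.allow</key>
        <true/>
    </dict>
    </plist>

The error text itself says the provisioning profile, not the entitlements file, is what's missing the key, which matches the symptom described.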
Replies: 6 · Boosts: 0 · Views: 1.6k · Activity: Sep ’25
ARKit hand tracking
Hello, I am developing a visionOS application and am interested in obtaining detailed data about the user's hands through ARKit, including but not limited to the transform and rotation angle. I have reviewed the Happy Beam sample, but it appears to only cover recognizing the user's specific gestures. Could you please advise on how to obtain the transform and rotation angle of the user's hand? Thank you.
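A minimal sketch of pulling per-joint transforms with HandTrackingProvider; the joint choice and printing are illustrative:

    import ARKit
    import RealityKit

    let session = ARKitSession()
    let handTracking = HandTrackingProvider()

    func trackHands() async throws {
        try await session.run([handTracking])
        for await update in handTracking.anchorUpdates {
            let anchor = update.anchor
            guard anchor.isTracked, let skeleton = anchor.handSkeleton else { continue }
            // World-space 4x4 for one joint: the anchor's transform composed
            // with the joint's transform relative to the anchor.
            let joint = skeleton.joint(.indexFingerTip)
            let worldMatrix = anchor.originFromAnchorTransform * joint.anchorFromJointTransform
            // RealityKit's Transform splits the matrix into translation and a
            // rotation quaternion, which covers "transform and rotation angle".
            let transform = Transform(matrix: worldMatrix)
            print(anchor.chirality, transform.translation, transform.rotation)
        }
    }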
Replies: 1 · Boosts: 0 · Views: 547 · Activity: Mar ’25
A question about adding a grounding shadow on Vision Pro
I want to add a grounding shadow to my Entity in a RealityView on Vision Pro. However, it seems the shadow can only appear on another Entity, so I am using plane detection in ARKit and adding a transparent plane to receive the shadow:

    let planeEntity = ModelEntity(
        mesh: .generatePlane(
            width: anchor.geometry.extent.width,
            height: anchor.geometry.extent.height
        ),
        materials: [material]
    )
    planeEntity.components.set(OpacityComponent(opacity: 0.0))

But sometimes a border appears around my Entity on the plane. I don't know why this happens, and I want to remove the border.
Replies: 5 · Boosts: 0 · Views: 441 · Activity: Mar ’25
Merge MeshAnchor from Scene Reconstruction for Vision Pro
Hi there, I'm trying to merge the mesh anchors into a single mesh but couldn't find any resources on this. Here is the code where I build a mesh from each mesh anchor and assign it to a model component with a shader graph material:

    func run(_ sceneRec: SceneReconstructionProvider) async {
        for await update in sceneRec.anchorUpdates {
            switch update.event {
            case .added, .updated:
                // Get or create an entity for this anchor
                let anchorEntity = anchors[update.anchor.id] ?? {
                    let entity = ModelEntity()
                    root?.addChild(entity)
                    anchors[update.anchor.id] = entity
                    return entity
                }()

                // Remove any existing children
                for child in anchorEntity.children {
                    child.removeFromParent()
                }

                // Generate the mesh from the anchor
                guard let mesh = try? await MeshResource(from: update.anchor) else { continue }
                guard let shape = try? await ShapeResource.generateStaticMesh(from: update.anchor) else { continue }
                print("Mesh added, vertices: \(update.anchor.geometry.vertices.count), bounds: \(mesh.bounds)")

                // Get the material to use
                var material: RealityKit.Material
                if isMaterialLoaded, let loadedMaterial = self.shaderMaterial {
                    material = loadedMaterial
                } else {
                    // Use a temporary material until the shader loads
                    var tempMaterial = UnlitMaterial()
                    tempMaterial.color = .init(tint: .purple.withAlphaComponent(0.5))
                    material = tempMaterial
                }

                await MainActor.run {
                    anchorEntity.components.set(ModelComponent(mesh: mesh, materials: [material]))
                    anchorEntity.setTransformMatrix(update.anchor.originFromAnchorTransform, relativeTo: nil)

                    // Add a collision component with the static flag - required for spatial interactions
                    anchorEntity.components.set(CollisionComponent(
                        shapes: [shape],
                        isStatic: true,
                        filter: .default
                    ))

                    // Make the entity interactive - enables spatial taps, drags, etc.
                    anchorEntity.components.set(InputTargetComponent())

                    let shadowComponent = GroundingShadowComponent(
                        castsShadow: true,
                        receivesShadow: true
                    )
                    anchorEntity.components.set(shadowComponent)
                }
            default:
                break
            }
        }
    }

I then use a spatial tap gesture to set the position parameter in the shader graph material, which creates a nice gradient from the tap position on the mesh to the rest of the mesh:

    SpatialTapGesture()
        .targetedToAnyEntity()
        .onEnded { value in
            let tappedEntity = value.entity
            // Check if the tapped entity is a child of tracking.meshAnchors
            if isChildOfMeshAnchors(entity: tappedEntity) {
                // Get the local position (in the entity's coordinate space)
                let localPosition = value.location3D
                // Convert to world position (scene coordinate space)
                let worldPosition = value.convert(localPosition, from: .local, to: .scene)
                print("Tapped mesh anchor at local position: \(localPosition)")
                print("Tapped mesh anchor at world position: \(worldPosition)")
                // Update the material parameter with the tap position
                updateMaterialTapPosition(entity: tappedEntity, position: worldPosition)
            } else {
                print("Tapped entity is not a mesh anchor")
            }
        }

My issue is that because there are several mesh anchors, the gradient often gets cut off at the edge of the mesh generated from one anchor, as opposed to a nice continuous gradient across the entire reconstructed scene. I couldn't find any documentation on how to merge meshes from mesh anchors; any tips would be helpful. Thank you!
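As far as I know there is no single call that merges MeshAnchors, but one manual approach is to pool every anchor's triangles into one MeshDescriptor. A rough sketch, assuming packed float3 vertex data and 32-bit triangle indices (worth verifying against the geometry's actual format):

    import ARKit
    import RealityKit

    // Merge several MeshAnchors into one MeshResource by transforming each
    // anchor's vertices into world space and pooling all triangle indices.
    func mergedMesh(from anchors: [MeshAnchor]) throws -> MeshResource {
        var positions: [SIMD3<Float>] = []
        var indices: [UInt32] = []

        for anchor in anchors {
            let geometry = anchor.geometry
            let base = UInt32(positions.count)
            let transform = anchor.originFromAnchorTransform

            // Read the packed float3 positions out of the vertex buffer.
            let vertices = geometry.vertices
            let vertexData = vertices.buffer.contents().advanced(by: vertices.offset)
            for i in 0..<vertices.count {
                let v = vertexData.advanced(by: i * vertices.stride)
                    .assumingMemoryBound(to: Float.self)
                let world = transform * SIMD4<Float>(v[0], v[1], v[2], 1)
                positions.append(SIMD3<Float>(world.x, world.y, world.z))
            }

            // Assumes triangles with 32-bit indices; check bytesPerIndex otherwise.
            let faces = geometry.faces
            let indexData = faces.buffer.contents().assumingMemoryBound(to: UInt32.self)
            for i in 0..<(faces.count * 3) {
                indices.append(base + indexData[i])
            }
        }

        var descriptor = MeshDescriptor(name: "mergedSceneMesh")
        descriptor.positions = MeshBuffer(positions)
        descriptor.primitives = .triangles(indices)
        return try MeshResource.generate(from: [descriptor])
    }

With a single merged entity, the shader graph gradient would no longer stop at anchor borders, at the cost of rebuilding the merged mesh whenever anchors update.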
Replies: 3 · Boosts: 0 · Views: 375 · Activity: Mar ’25
Human body joint tracking in visionOS
The goal is to achieve precise joint tracking for clinical assessment: the doctor wears the AVP while observing the patient's movement. Do you have any recommended best practices for integrating real-time joint tracking and displaying the joints on the patient within visionOS? We attempted to use VNHumanBodyPose3DObservation, which theoretically should work, but we were unable to display the detected joints in an Immersive Space for real-time validation. This makes it difficult for the doctor to confirm accurate tracking; ideally, a photo or video of the range-of-motion assessment would also be captured for the patient record. Are there alternative methods to achieve precise real-time joint tracking without requiring main camera access (com.apple.developer.arkit.main-camera-access.allow)?
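Not a full solution, but for the display side, here is a sketch of how recognized 3D joints could be dropped into an immersive space as spheres for visual validation. It assumes you already have a VNHumanBodyPose3DObservation and a rootTransform (hypothetical) that places the skeleton in the scene:

    import Vision
    import RealityKit

    // Drop a small sphere at each recognized 3D joint so tracking can be
    // eyeballed in an immersive space. `rootTransform` is a hypothetical
    // matrix you supply to place the skeleton in front of the user.
    func visualizeJoints(observation: VNHumanBodyPose3DObservation,
                         in parent: Entity,
                         rootTransform: simd_float4x4) throws {
        let joints = try observation.recognizedPoints(.all)
        for (name, point) in joints {
            // point.position is the joint's transform relative to the skeleton root.
            let sphere = ModelEntity(
                mesh: .generateSphere(radius: 0.02),
                materials: [SimpleMaterial(color: .green, isMetallic: false)]
            )
            sphere.transform = Transform(matrix: rootTransform * point.position)
            sphere.name = String(describing: name)
            parent.addChild(sphere)
        }
    }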
Replies: 3 · Boosts: 0 · Views: 350 · Activity: Mar ’25
Person Occlusion on the Vision Pro
I am currently creating an app where two people share an instance of an immersive space so that they can point to certain things in it. Right now, other people are hidden behind the immersive content, and even with people awareness enabled for everything, people are still too difficult to see. I've found this documentation (https://developer.apple.com/documentation/arkit/occluding-virtual-content-with-people), which describes what I want to do, but it is only listed as working on iOS and iPadOS. Is there anything similar to this that will work on visionOS?
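For comparison, the approach from the linked article is a frame-semantics option on the world-tracking configuration; to my knowledge that configuration type is not part of the visionOS SDK, which is why an equivalent is being asked for here:

    import ARKit

    // The iOS/iPadOS approach from the linked article, for reference only;
    // ARWorldTrackingConfiguration is not available on visionOS.
    let session = ARSession()
    let configuration = ARWorldTrackingConfiguration()
    if ARWorldTrackingConfiguration.supportsFrameSemantics(.personSegmentationWithDepth) {
        configuration.frameSemantics.insert(.personSegmentationWithDepth)
    }
    session.run(configuration)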
Replies: 2 · Boosts: 0 · Views: 144 · Activity: Mar ’25
ARKit Planes do not appear where expected on visionOS
I'm using ARKitSession and PlaneDetectionProvider to detect planes, with a basic process that creates an entity for each detected plane. Each one gets a random color for its material, and each plane is sized based on the bounds of the anchor provided by ARKit:

    let mesh = MeshResource.generatePlane(
        width: anchor.geometry.extent.width,
        depth: anchor.geometry.extent.height
    )

Then I'm using this to position each entity:

    entity.transform = Transform(matrix: anchor.originFromAnchorTransform)

This seems to be the right method, but many (not all) planes are not where they should be. The sizes look OK, but the X and Y positions are off. Take this large green plane on the wall: it should span the entire wall, but it is offset along the X axis so that it is pushed to the left of where the center of the anchor is. When I visualize surfaces using the Xcode debugging tools, that tool reports the planes where I'd expect them to be. Can you see what I'm getting wrong here? Full code below.

    struct Example068: View {
        @State var session = ARKitSession()
        @State private var planeAnchors: [UUID: Entity] = [:]
        @State private var planeColors: [UUID: Color] = [:]

        var body: some View {
            RealityView { content in
            } update: { content in
                for (_, entity) in planeAnchors {
                    if !content.entities.contains(entity) {
                        content.add(entity)
                    }
                }
            }
            .task {
                try! await setupAndRunPlaneDetection()
            }
        }

        func setupAndRunPlaneDetection() async throws {
            let planeData = PlaneDetectionProvider(alignments: [.horizontal, .vertical, .slanted])
            if PlaneDetectionProvider.isSupported {
                do {
                    try await session.run([planeData])
                    for await update in planeData.anchorUpdates {
                        switch update.event {
                        case .added, .updated:
                            let anchor = update.anchor
                            if planeColors[anchor.id] == nil {
                                planeColors[anchor.id] = generatePastelColor()
                            }
                            let planeEntity = createPlaneEntity(for: anchor, color: planeColors[anchor.id]!)
                            planeAnchors[anchor.id] = planeEntity
                        case .removed:
                            let anchor = update.anchor
                            planeAnchors.removeValue(forKey: anchor.id)
                            planeColors.removeValue(forKey: anchor.id)
                        }
                    }
                } catch {
                    print("ARKit session error \(error)")
                }
            }
        }

        private func generatePastelColor() -> Color {
            let hue = Double.random(in: 0...1)
            let saturation = Double.random(in: 0.2...0.4)
            let brightness = Double.random(in: 0.8...1.0)
            return Color(hue: hue, saturation: saturation, brightness: brightness)
        }

        private func createPlaneEntity(for anchor: PlaneAnchor, color: Color) -> Entity {
            let mesh = MeshResource.generatePlane(
                width: anchor.geometry.extent.width,
                depth: anchor.geometry.extent.height
            )
            var material = PhysicallyBasedMaterial()
            material.baseColor.tint = UIColor(color)
            let entity = ModelEntity(mesh: mesh, materials: [material])
            entity.transform = Transform(matrix: anchor.originFromAnchorTransform)
            return entity
        }
    }
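One likely explanation for the offset (an assumption, but consistent with the symptom): a PlaneAnchor's originFromAnchorTransform is not necessarily centered on the detected extent; the extent carries its own transform that needs to be composed in. A sketch of the adjusted positioning, worth verifying on device:

    // Compose the anchor transform with the extent's own transform so the
    // generated plane is centered on the detected extent rather than on the
    // anchor origin.
    let worldFromExtent = anchor.originFromAnchorTransform
        * anchor.geometry.extent.anchorFromExtentTransform
    entity.transform = Transform(matrix: worldFromExtent)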
Replies: 3 · Boosts: 0 · Views: 205 · Activity: Apr ’25
How to Achieve Realistic Colors and Textures in LiDAR Scanning with Swift
Hello, I'm developing a LiDAR-based scanning app using Swift, where I can successfully perform scans and export the results as .obj files. My goal is to have the scan's colors and textures closely resemble real-world visuals as captured by the camera, similar to the results shown in this repository. In the referenced repository the result is demonstrated with a single screenshot, but I want to display the textures and colors throughout the entire scanning process, not just at the final result. To clarify, I'm not focused on scanning individual objects but rather larger environments like rooms, houses, or outdoor spaces such as streets. Here's what I'm aiming for:

- Realistic colors and textures that match what the camera sees during the scan.
- Continuous texture rendering during the scanning process, not just in the final exported model.

Could anyone share guidance, sample code, or point me to relevant documentation to achieve this? Any help would be greatly appreciated! Thank you!
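Not a complete pipeline, but the core of live coloring is projecting mesh vertices into the current camera frame and sampling the image there. A sketch, with the pixel-buffer sampling left as a helper you'd supply:

    import ARKit
    import UIKit

    // Project a world-space mesh vertex into the current camera image. The
    // returned point is where to sample frame.capturedImage for that vertex's
    // color; the actual CVPixelBuffer sampling is a helper you'd implement.
    func imagePoint(for worldVertex: SIMD3<Float>,
                    in frame: ARFrame,
                    viewportSize: CGSize) -> CGPoint? {
        let projected = frame.camera.projectPoint(worldVertex,
                                                  orientation: .portrait,
                                                  viewportSize: viewportSize)
        // Discard vertices that fall outside the captured image.
        guard projected.x >= 0, projected.x < viewportSize.width,
              projected.y >= 0, projected.y < viewportSize.height else { return nil }
        return projected
    }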
Replies: 1 · Boosts: 0 · Views: 144 · Activity: May ’25
Transparent Material Turning into Occlusion Material
I am trying to create an object in an immersive space that is partially transparent (~50% opacity). I have implemented this in a few different ways, including creating a model entity and setting its opacity component to 0.5, and creating a custom material with blending set to a transparent opacity of 0.5. Both work partially: they behave as intended in many cases, but seemingly at random they act like occlusion material and block any other immersive content behind them, showing the real world instead. Some notes:

- I am using RealityKit to render the semi-transparent object and an opaque object that is behind it.
- I am using visionOS 2.1 and am updating the location of the semi-transparent object often.
- Both objects are ModelEntities.

I would appreciate any guidance on how to implement this. Please let me know if there are any other questions.
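One thing that may be worth trying (an assumption, not a confirmed fix): transparent content is sensitive to draw order, and ModelSortGroupComponent lets you pin an explicit order so the transparent entity always renders after the opaque one:

    import RealityKit

    // opaqueEntity / transparentEntity are placeholders for the two models.
    // Same sort group, explicit order: opaque draws first, transparent second.
    let sortGroup = ModelSortGroup(depthPass: nil)
    opaqueEntity.components.set(ModelSortGroupComponent(group: sortGroup, order: 0))
    transparentEntity.components.set(ModelSortGroupComponent(group: sortGroup, order: 1))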
Replies: 3 · Boosts: 0 · Views: 171 · Activity: May ’25