The visionOS 26 Enterprise APIs add a new Shared Coordinate Space feature: participants exchange their coordinate data via SharedCoordinateSpaceProvider over their own network, and once a shared coordinate space has been established with nearby participants, the connectedParticipantIdentifiers(participants: [UUID]) event is received.
However, at that point Event.participantIdentifier is still an invalid default value (00000000-0000-0000-FFFF-FFFFFFFF). When or how can I get a valid event.participantIdentifier, or is there some other way to get the local participant identifier?
If this is a bug, please fix it in a later beta release. Thank you.
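For context, the flow I'm running looks roughly like this (simplified; apart from the names quoted above, the property and case names are from memory and may not match the shipping API exactly):

import ARKit

// Simplified sketch; eventUpdates and the exact case shapes are my recollection of the API.
let session = ARKitSession()
let provider = SharedCoordinateSpaceProvider()
try await session.run([provider])

for await event in provider.eventUpdates {
    switch event {
    case .connectedParticipantIdentifiers(let participants):
        // Received once the shared space is established with nearby participants.
        // At this moment the local participant identifier is still the invalid default UUID.
        print("connected participants:", participants)
    default:
        break
    }
}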
Help me, please. I can't remove little messed-up pixels, and they are getting more and more numerous. Please help!
2025 MacBook Pro, no screen protector; this is the actual screen.
When I run my app from Xcode on a device running iOS 26, the RoomPlan capture is corrupted and the recording is green and purple. This issue does not occur when I use an older version of iOS or when I run the app via TestFlight or the App Store.
In my Reality Composer Pro workflow for Vision Pro development, I’m using xcrun realitytool image to pre-compress textures into .ktx format, typically using ASTC block compression. These textures are used for cubemaps and environment assets.
I’ve noticed that regardless of the image content—whether it’s a highly detailed photo or a completely black image—once compressed with the same ASTC block size (e.g., ASTC_8x8), the resulting .ktx file size is nearly identical. There appears to be no content-aware logic that adapts the compression ratio to the actual texture complexity.
In contrast, Unreal Engine behaves differently: even when all cubemap faces are imported at the same resolution as DDS textures, the engine performs content-aware compression during packaging:
Low-complexity images are compressed more aggressively
The final packaged file size varies based on content complexity
Since Reality Composer Pro requires textures to be pre-compressed as .ktx, there’s no opportunity for runtime optimization or per-image compression adjustment.
Just wondering: is there any recommended way to implement content-aware compression for .ktx textures in Reality Composer Pro?
Or any best practices to optimize .ktx sizes based on image complexity?
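To make the question concrete, the kind of content-aware step I have in mind is a pre-pass that inspects each texture and picks an ASTC block size before compression. The sketch below is my own invention, not an Apple API or recommendation; the heuristic, thresholds, and function name are all made up for illustration:

import Foundation
import CoreGraphics
import ImageIO

// Hypothetical pre-pass: estimate texture complexity and suggest an ASTC block size.
func suggestedASTCBlock(for url: URL) -> String {
    guard let src = CGImageSourceCreateWithURL(url as CFURL, nil),
          let image = CGImageSourceCreateImageAtIndex(src, 0, nil) else { return "ASTC_8x8" }

    // Downsample to a small grayscale thumbnail and use variance as a crude complexity metric.
    let size = 64
    var pixels = [UInt8](repeating: 0, count: size * size)
    pixels.withUnsafeMutableBytes { buffer in
        guard let ctx = CGContext(data: buffer.baseAddress, width: size, height: size,
                                  bitsPerComponent: 8, bytesPerRow: size,
                                  space: CGColorSpaceCreateDeviceGray(),
                                  bitmapInfo: CGImageAlphaInfo.none.rawValue) else { return }
        ctx.draw(image, in: CGRect(x: 0, y: 0, width: size, height: size))
    }

    let mean = pixels.reduce(0.0) { $0 + Double($1) } / Double(pixels.count)
    let variance = pixels.reduce(0.0) { $0 + pow(Double($1) - mean, 2) } / Double(pixels.count)

    // Threshold and block-size mapping are arbitrary values, not Apple guidance.
    return variance < 50 ? "ASTC_12x12" : "ASTC_8x8"   // flatter images get a larger block
}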
Thanks!
I have been referencing the Object Tracking tutorial from WWDC 2024 for visionOS, which shows how Create ML is used to create a reference object that can then be tracked in an ARKit session.
I am looking forward to building this feature in an AR app for iPhone; I am using an iPhone 13 Pro Max. I have created a couple of reference objects with Create ML.
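On iPhone, the closest ARKit path I know of is the older object-detection API, shown below only as an assumption of where I would start. Note that it consumes .arobject files (scanned with ARObjectScanningConfiguration), which is a different format from the .referenceobject that Create ML produces for visionOS:

import ARKit

// Assumption: iOS ARKit object detection with .arobject reference objects in an
// asset-catalog group named "DetectionObjects" (a placeholder name).
func runObjectDetection(on session: ARSession) {
    let configuration = ARWorldTrackingConfiguration()
    if let objects = ARReferenceObject.referenceObjects(inGroupNamed: "DetectionObjects",
                                                        bundle: nil) {
        configuration.detectionObjects = objects
    }
    session.run(configuration)
    // The ARSessionDelegate then reports ARObjectAnchor instances as objects are recognized.
}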
Hi, I'm trying to place an object in front of an AVPlayer that is docked in the VideoDockingRegion, but when launched in an immersive space, the video renders through the objects placed in front of it. How do I make sure these objects remain visible?
image for reference
Topic: Spatial Computing · SubTopic: Reality Composer Pro · Tags: ARKit, RealityKit, Reality Composer Pro, Shader Graph Editor
First, I scan the first room using the RoomPlan API. Because I need to scan a second room, I stop the capture with captureSession.stop(pauseARSession: false); as I understand it, the ARSession keeps running at that point.
Second, before scanning the next room, I want to run another ARView (in order to detect some objects in the first room that RoomPlan did not detect).
But at that point, the second ARView (RoomPlan has its own ARView internally, I think) always shows a black screen and does not work. This is the problem I want to resolve. Please help me get the second ARView working.
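The direction I'm trying to go is sketched below. It is untested and assumes the iOS 17 RoomCaptureSession(arSession:) initializer, so that RoomPlan and my own ARView share a single ARSession instead of each creating their own:

import RoomPlan
import ARKit
import RealityKit

// Untested sketch: hand the ARView's own session to RoomPlan so both render off the
// same ARSession across rooms.
let arView = ARView(frame: .zero)
let captureSession = RoomCaptureSession(arSession: arView.session)

// ... scan the first room ...
captureSession.stop(pauseARSession: false)   // keep the underlying ARSession running

// The hope is that arView keeps rendering off the still-running session while I run my own
// object detection, before starting the capture of the second room.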
As far as I know, Apple hasn’t opened access to the Vision Pro camera for developers yet, so I’m trying to find possible workarounds within the current capabilities. I’m wondering if there’s any way to apply a mesh to a person in the scene in Vision Pro, or if there’s an alternative approach to roughly detect a human shape in front of the user?
My development team admin requested the Enterprise API for camera access on the Vision Pro. That request was granted, we received a license for usage, and we got instructions with the next steps for integrating it.
We did the following:
Even when I download and run the sample project for "Accessing the Main Camera" and follow all of the exact instructions mentioned here: https://developer.apple.com/documentation/visionos/accessing-the-main-camera
I am simply unable to receive camera frames.
I added the capabilities, created a new provisioning profile with this access, added the entitlements to the Info.plist and the entitlements file, replaced the dummy license file with the one we were sent, and have a matching bundle identifier and development certificate, but it is still not showing camera access for some reason.
"Main Camera Access" shows up in our Signing & Capabilities tab, and we also added the NSMainCameraDescription to the Info.plist and allow access when opening the app. None of this works. Not in my app, and not in the sample app that I downloaded and ran on the Vision Pro after replacing the dummy license file.
Hello everyone, I'm a new developer and I'm still learning the foundations of Swift and SwiftUI while building my first app. Today I wanted to ask how to implement AR Quick Look views inside my app. I want to be able to dynamically preview AR objects in a dedicated view; however, I don't seem to have understood where and how to locate AR objects inside my project. I tried including them in the Assets folder of the project, in the Resources folder, and in the main folder of my project alongside the MyAppApp.swift file. None of the methods I used seemed to work, in that none of the objects was ever located. I made sure to specify the path to the files every time, but somehow the location isn't recognized. I also tried giving no path so that the app would search for the files in their default location (which I apparently haven't grasped yet), but that attempt failed too. I don't have the code sample on me at the moment, but I will write a follow-up comment on this post to show you what I wrote in case anyone is interested in debugging my code. Meanwhile, if anyone would be so kind as to point me to the relevant support article or to comment below with the sample code they used in their app, I would very much appreciate it, so that I can start debugging. Thank you for reading this, I appreciate you.
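In the meantime, here is a minimal sketch of the kind of thing I'm attempting, assuming a placeholder file named "MyModel.usdz" that has been added to the app target (so it is copied into the bundle):

import SwiftUI
import QuickLook
import ARKit

// Wraps QLPreviewController so a .usdz in the app bundle can be shown via AR Quick Look.
struct ARQuickLookView: UIViewControllerRepresentable {
    func makeUIViewController(context: Context) -> QLPreviewController {
        let controller = QLPreviewController()
        controller.dataSource = context.coordinator
        return controller
    }
    func updateUIViewController(_ controller: QLPreviewController, context: Context) {}
    func makeCoordinator() -> Coordinator { Coordinator() }

    final class Coordinator: NSObject, QLPreviewControllerDataSource {
        func numberOfPreviewItems(in controller: QLPreviewController) -> Int { 1 }
        func previewController(_ controller: QLPreviewController,
                               previewItemAt index: Int) -> QLPreviewItem {
            // Bundle.main.url(...) only finds the file if it is part of the target's
            // "Copy Bundle Resources" phase; the folder shown in Xcode doesn't matter.
            let url = Bundle.main.url(forResource: "MyModel", withExtension: "usdz")!
            return ARQuickLookPreviewItem(fileAt: url)
        }
    }
}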
While using Screen Mirroring in Developer mode within my immersive space, I noticed an alignment issue with the computer cursor (the transparent circle). When I move it toward an attachment view, the cursor remains horizontal instead of aligning with the surface of the attachment view. It displays correctly on a 2D window; it is only wrong on the attachment view.
Is this behavior a bug, or could it be caused by a missing or incorrect configuration on the attachment view?
Any help would be appreciated, thanks.
I am trying to loop my VideoMaterial. I have researched AVQueuePlayer and AVPlayerLooper and tried to implement them in my code.
Please see attached.
There are no errors showing up but the videoMaterial is no longer working.
Please see the attached for the working code that plays the videoMaterial.
I am stumped; can anyone help me solve this?
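For reference, the looping setup I'm attempting looks roughly like this (the URL and entity names are placeholders for my own assets):

import AVFoundation
import RealityKit

// Build a looping queue player and drive the VideoMaterial from it.
let url = Bundle.main.url(forResource: "myVideo", withExtension: "mp4")!
let playerItem = AVPlayerItem(asset: AVURLAsset(url: url))
let queuePlayer = AVQueuePlayer()
let looper = AVPlayerLooper(player: queuePlayer, templateItem: playerItem)
// Note: the AVPlayerLooper must be kept in a stored property; if it is a local that goes
// out of scope, looping silently stops working.

let videoMaterial = VideoMaterial(avPlayer: queuePlayer)
modelEntity.model?.materials = [videoMaterial]
queuePlayer.play()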
Thank you.
Topic: Spatial Computing · SubTopic: Reality Composer Pro
Hi team,
I believe I’ve found a registration issue between ARFrame.sceneDepth and ARFrame.capturedImage when using high-resolution frame capture on a 2022 iPad Pro (6th gen).
When enabling high-resolution capture:
if let highResFormat = ARWorldTrackingConfiguration.recommendedVideoFormatForHighResolutionFrameCapturing {
    config.videoFormat = highResFormat
}
…
arView.session.captureHighResolutionFrame { ... }
the depth map provided by ARFrame.sceneDepth no longer aligns correctly with the corresponding high-resolution capturedImage.
This misalignment results in consistently over-estimated distance measurements in my app (which relies on mapping depth to 2D pixel coordinates).
iPad Pro (6th gen): misalignment occurs only when capturing high-resolution frames.
iPhone 16 Pro: depth is correctly registered for both standard and high-resolution captures.
It appears the camera intrinsics, specifically the FOV, change between the “regular” resolution stream and the high-resolution capture on the iPad. My suspicion is that the depth data continues using the intrinsics of the lower resolution stream, resulting in an unregistered depth-to-RGB mapping.
Once I have the iPad in hand again, I will confirm whether camera.intrinsics or FOV differ between the low-res and high-res frames.
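The quick check I plan to run is roughly this (a sketch, using the same arView as above):

// Compare intrinsics/resolution between the live stream and a high-resolution capture
// to see whether the depth-to-RGB registration assumption breaks.
if let liveFrame = arView.session.currentFrame {
    print("standard:", liveFrame.camera.imageResolution, liveFrame.camera.intrinsics)
}
arView.session.captureHighResolutionFrame { highResFrame, error in
    if let camera = highResFrame?.camera {
        print("high-res:", camera.imageResolution, camera.intrinsics)
    }
}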
Is this a known issue with high-resolution frame capture on the 2022 iPad Pro? If not, I’m happy to provide some more thorough sample code.
Thanks for your time!
Hi, I'm trying to create a virtual movie theater, but after running computeDiffuseReflectionUVs.py and applying the attenuation map, I noticed the light falloff effect just covers over the objects. I used the Apple-provided attenuation map (I did not specify an attenuation map name in the Python script) with a sample size of 6000. I thought the Python script would calculate per-vertex values and create shadowing for, say, the backs of the chairs. Am I understanding this wrong?
I've tried following Apple's documentation to apply a video material to a ModelEntity, but I encountered a compile error while attempting to specify the spatial audio type.
It is a 360° video on a sphere, which plays just fine, but the audio is too quiet compared to the volume I get when I preview the video in Xcode. So I tried to configure the audio playback mode on the material, but it gives me a compile error:
'audioInputMode' is unavailable in visionOS
'audioInputMode' has been explicitly marked unavailable here (RealityFoundation.VideoPlaybackController.audioInputMode)
https://developer.apple.com/documentation/realitykit/videomaterial/
Code:
let player = AVPlayer(url: url)
// Instantiate and configure the video material.
let material = VideoMaterial(avPlayer: player)
// Configure audio playback mode.
material.controller.audioInputMode = .spatial // this line won’t compile.
visionOS 2.4, Xcode 16.4; I also tried Xcode 26 beta 2.
The videos are HEVC-encoded MPEG-4 files.
Is there any other way to do this, or is there a workaround available?
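One idea I'm considering as a workaround (unverified; whether this affects VideoMaterial's audio at all is an assumption on my part) is to adjust the spatial audio settings on the entity that carries the material:

// videoSphere is the sphere entity that carries the VideoMaterial.
// The +10 dB gain is an arbitrary test value, not a recommendation.
videoSphere.components.set(SpatialAudioComponent(gain: 10))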
Thank you.
We have successfully obtained the permissions for "Main Camera Access" and "Passthrough in screen capture" from Apple. Currently, the video streams we receive contain only the physical world and do not include digital content. How can we obtain video streams that include both the physical and the digital world?
Thank you!
Topic: Spatial Computing · SubTopic: Reality Composer Pro · Tags: Enterprise, Swift, Reality Composer Pro, visionOS
Hi, I'm currently implementing 180° / 360° support for immersive video in my app.
I was able to implement 360° easily by just applying a VideoMaterial to a flipped sphere.
However, I'm a bit stuck on 180°. I want to implement it by setting the VideoMaterial on a hemisphere mesh, but since RealityKit doesn't yet provide a default function such as MeshResource.generateHemisphere, I instead want to make the front half of the sphere show the VideoMaterial and leave the back half transparent. I thought this would make my sphere look like a hemisphere.
But I can't find a way to implement this method. I would appreciate any advice / ideas / information that might help.
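One alternative I'm experimenting with is building the hemisphere mesh myself with MeshDescriptor. The sketch below is untested; the tessellation parameters and triangle winding are my own guesses and may need adjusting:

import Foundation
import RealityKit

// Hypothetical hemisphere generator: the front half of a UV sphere, so a VideoMaterial
// can be mapped onto it for 180° playback.
func makeHemisphere(radius: Float = 10, slices: Int = 64, stacks: Int = 32) throws -> MeshResource {
    var positions: [SIMD3<Float>] = []
    var uvs: [SIMD2<Float>] = []
    var indices: [UInt32] = []

    for stack in 0...stacks {
        let v = Float(stack) / Float(stacks)        // 0 (top) ... 1 (bottom)
        let phi = v * .pi                           // polar angle
        for slice in 0...slices {
            let u = Float(slice) / Float(slices)    // 0 ... 1 across the half
            let theta = (u - 0.5) * .pi             // -90° ... +90° of longitude
            positions.append([
                radius * sin(phi) * sin(theta),
                radius * cos(phi),
                -radius * sin(phi) * cos(theta)
            ])
            uvs.append([u, 1 - v])
        }
    }

    let columns = UInt32(slices + 1)
    for stack in 0..<stacks {
        for slice in 0..<slices {
            let a = UInt32(stack) * columns + UInt32(slice)
            let b = a + columns
            // Wind triangles so the inside of the hemisphere faces the viewer;
            // flip the order if the material ends up on the outside instead.
            indices += [a, b, a + 1, a + 1, b, b + 1]
        }
    }

    var descriptor = MeshDescriptor(name: "hemisphere")
    descriptor.positions = MeshBuffer(positions)
    descriptor.textureCoordinates = MeshBuffer(uvs)
    descriptor.primitives = .triangles(indices)
    return try MeshResource.generate(from: [descriptor])
}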
Hi all,
I'm currently developing a real-time object reconstruction app using ARKit. The goal is to scan large objects using ARKit’s depth and transform data, and generate a point cloud.
However, I'm facing a major challenge: transform drift / world alignment issues.
The localToWorld transform provided by ARKit frequently seems to drift or become unstable across frames.
This results in misaligned point clouds even when the device is moved slowly or kept relatively still.
In some cases, a static surface scanned over a few seconds results in clearly misaligned fragments.
This makes it difficult to accurately stitch a multi-frame point cloud. I have experimented with various lighting conditions and object textures, but the issue persists in all cases. At times, the relative error between frames reaches up to 20 cm, while in other instances the error is minimal; however, the drift gradually accumulates over time, leading to an overall enlargement of the reconstructed object. I have attached images of both cases here.
Questions:
Are there specific conditions under which ARKit’s world transform is expected to drift?
Is there a way to detect or recover from this drift during runtime? (A partial detection idea is sketched after these questions.)
Any best practices for maintaining consistent tracking during scanning or measurement sessions?
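Regarding the second question, the only partial signal I'm aware of (detection, not recovery) is ARKit's tracking state, which at least lets me pause accumulation of the point cloud while tracking is degraded:

import ARKit

// Observe tracking-quality changes and skip fusing frames captured while tracking is limited.
final class ScanSessionDelegate: NSObject, ARSessionDelegate {
    func session(_ session: ARSession, cameraDidChangeTrackingState camera: ARCamera) {
        switch camera.trackingState {
        case .limited(let reason):
            print("Tracking limited: \(reason) — skip fusing frames captured now")
        case .notAvailable:
            print("Tracking not available")
        case .normal:
            print("Tracking normal")
        }
    }
}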
Is it possible to dynamically update the ModelPositionOffset of a GeometryModifier with a depth map image?
In my code I set up the parameter for the "DepthMapTexture" universal input node
and tried setting the depth map on depthTextureResource. I have two DrawableQueues: one for setting InputTexture and one for setting DepthMapTexture. This only shows the part that concerns setting DepthMapTexture.
This is where I define the plane entity.
And this is the shader graph.
What I noticed with the GeometryModifier is that the depth map image has to have the same dimensions as the input image.
And when I applied this material to a .usdz file, with a pre-assigned image and depth map from RCP, and loaded that Entity from code, the depth map was applied correctly.
What I am unsure about is whether it is even possible to define a model entity from code, apply the ShaderGraphMaterial from RCP, and dynamically update the image used in the GeometryModifier.
Maybe I'm missing something when defining the Entity, something that allows geometry modifications?
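For reference, the rough update path I'm using looks like this (a sketch only: "DepthMapTexture", the material path, and "placeholderDepth" are names from my own project or placeholders, not standard API values):

import RealityKit
import RealityKitContent

// Start from a placeholder texture, then let a DrawableQueue drive its contents.
let depthTextureResource = try await TextureResource(named: "placeholderDepth")
let descriptor = TextureResource.DrawableQueue.Descriptor(
    pixelFormat: .bgra8Unorm, width: 1024, height: 1024,
    usage: [.renderTarget, .shaderRead], mipmapsMode: .none)
let depthQueue = try TextureResource.DrawableQueue(descriptor)
depthTextureResource.replace(withDrawables: depthQueue)   // frames presented to this queue update the texture

// Load the shader-graph material authored in RCP and bind the depth texture parameter.
var material = try await ShaderGraphMaterial(named: "/Root/DepthMaterial",
                                             from: "Scene.usda",
                                             in: realityKitContentBundle)
try material.setParameter(name: "DepthMapTexture",
                          value: .textureResource(depthTextureResource))
planeEntity.model?.materials = [material]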
Topic: Spatial Computing · SubTopic: Reality Composer Pro · Tags: RealityKit, Reality Composer Pro, Shader Graph Editor