I have my immersive space set up like:
ImmersiveSpace(id: "Theater") {
ImmersiveTeleopView()
.environment(appModel)
.onAppear() {
appModel.immersiveSpaceState = .open
}
.onDisappear {
appModel.immersiveSpaceState = .closed
}
}
.immersionStyle(selection: .constant(appModel.immersionStyle.style), in: .mixed, .full)
This allows me to set the immersion style while in the space (from a Picker in a SwiftUI window). The scene responds correctly, but a lot of my immersive space's functionality is gone after the style change: I am no longer able to enable/disable entities (which I also have toggles for in the SwiftUI window). I have to exit and re-enter the immersive space to regain the ability to change the enabled state of my entities.
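For reference, the Picker in my SwiftUI window is essentially this (a simplified sketch; the view name is illustrative):
import SwiftUI

// Simplified sketch of the Picker that drives the style change (view name is illustrative).
struct StylePickerView: View {
    @Bindable var appModel: AppModel

    var body: some View {
        Picker("Immersion Style", selection: $appModel.immersionStyle) {
            ForEach(IStyle.allCases) { style in
                Text(style.rawValue).tag(style)
            }
        }
        .pickerStyle(.segmented)
    }
}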
My appModel.immersionStyle is inspired by the Compositor Services sample (although I am using a RealityView) shown at https://developer.apple.com/documentation/CompositorServices/interacting-with-virtual-content-blended-with-passthrough and looks like this:
public enum IStyle: String, CaseIterable, Identifiable {
    case mixedStyle, fullStyle
    public var id: Self { self }

    var style: ImmersionStyle {
        switch self {
        case .mixedStyle:
            return .mixed
        case .fullStyle:
            return .full
        }
    }
}

/// Maintains app-wide state
@MainActor
@Observable
class AppModel {
    // Immersion Style
    public var immersionStyle: IStyle = .mixedStyle
VideoMaterial Black Screen on Vision Pro Device (Works in Simulator)
App Overview
App Name: Extn Browser
Bundle ID: ai.extn.browser
Purpose: A visionOS web browser that plays 360°/180° VR videos in an immersive sphere environment
Development Environment & SDK Versions
Component
Version
Xcode
26.2
Swift
6.2
visionOS Deployment Target
26.2
Swift Concurrency
MainActor isolation enabled
The app has been released on TestFlight.
Frameworks Used
SwiftUI - UI framework
RealityKit - 3D rendering, MeshResource, ModelEntity, VideoMaterial
AVFoundation - AVPlayer, AVAudioSession
WebKit - WKWebView for browser functionality
Network - NWListener for local proxy server
Sphere Video Mechanism
The app creates an immersive 360° video experience using the following approach:
// 1. Create sphere mesh (10 meter radius for immersive viewing)
let mesh = MeshResource.generateSphere(radius: 10.0)
// 2. Create initial transparent material
var material = UnlitMaterial()
material.color = .init(tint: .clear)
// 3. Create entity and invert sphere (negative X scale)
let sphere = ModelEntity(mesh: mesh, materials: [material])
sphere.scale = SIMD3<Float>(-1, 1, 1) // Inverts normals for inside-out viewing
sphere.position = SIMD3<Float>(0, 1.5, 0) // Eye level
// 4. Create AVPlayer with video URL
let player = AVPlayer(url: videoURL)
// 5. Configure audio session for visionOS
let audioSession = AVAudioSession.sharedInstance()
try audioSession.setCategory(.playback, mode: .moviePlayback, options: [.mixWithOthers])
try audioSession.setActive(true)
// 6. Create VideoMaterial and apply to sphere
let videoMaterial = VideoMaterial(avPlayer: player)
if var modelComponent = sphere.components[ModelComponent.self] {
    modelComponent.materials = [videoMaterial]
    sphere.components.set(modelComponent)
}
// 7. Start playback
player.play()
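For completeness, the sphere from steps 1-7 is added to the scene from a RealityView roughly like this (simplified sketch; makeVideoSphere() is a placeholder for the code above):
import SwiftUI
import RealityKit

// Simplified sketch of ImmersiveView; makeVideoSphere() is a placeholder for steps 1-7 above.
struct ImmersiveView: View {
    var body: some View {
        RealityView { content in
            let sphere = await makeVideoSphere()
            content.add(sphere)
        }
    }
}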
ImmersiveSpace Configuration
// browserApp.swift
ImmersiveSpace(id: appModel.immersiveSpaceID) {
    ImmersiveView()
        .environment(appModel)
}
.immersionStyle(selection: .constant(.mixed), in: .mixed)
Entitlements
<!-- browser.entitlements -->
<key>com.apple.security.app-sandbox</key>
<true/>
<key>com.apple.security.network.client</key>
<true/>
<key>com.apple.security.network.server</key>
<true/>
Info.plist Network Configuration
<key>NSAppTransportSecurity</key>
<dict>
<key>NSAllowsArbitraryLoads</key>
<true/>
</dict>
The Issue
Behavior in Simulator: Video plays correctly on the inverted sphere surface - 360° video is visible and wraps around the user as expected.
Behavior on Physical Vision Pro: The sphere displays a black screen. No video content is visible, though the sphere entity itself is present.
Important: Not a DRM/Licensing Issue
This issue is NOT related to Digital Rights Management (DRM) or FairPlay. I have tested with:
Unlicensed raw MP4 video files (no DRM protection)
Self-hosted video content with no copy protection
Direct MP4 URLs from CDN without any licensing requirements
The same black screen behavior occurs with all unprotected video sources (plain H.264 MP4, no DRM), ruling out DRM as the cause.
Screen Recording: Working in Simulator
The following screen recording demonstrates playing a 360° YouTube video in the immersive sphere on the visionOS Simulator:
https://cdn.commenda.kr/screen-001.mov
This confirms that the VideoMaterial and sphere rendering work correctly in the simulator, but the same setup shows a black screen on the physical Vision Pro device.
Observations
AVPlayer status reports .readyToPlay - The video appears to load successfully
VideoMaterial is created without errors - No exceptions thrown
Sphere entity renders - The geometry is visible (black surface)
Audio session is configured - No errors during audio session setup
Network requests succeed - The video URL is accessible from the device
Same result with local/unprotected content - DRM is not a factor
Console Logs (Device)
The logging shows:
Sphere created and added to scene
AVPlayer created with correct URL
VideoMaterial created and applied
Player status transitions to .readyToPlay
player.play() called successfully
Rate shows 1.0 (playing)
Despite all success indicators, the rendered output is black.
Questions for Apple
Are there known differences in VideoMaterial behavior between the visionOS Simulator and physical Vision Pro hardware?
Does VideoMaterial(avPlayer:) require specific video codec/format requirements that differ on device? (The test video is a standard H.264 MP4)
Is there a required Metal capability or GPU feature for VideoMaterial that may not be available in certain contexts on device?
Does the immersion style (.mixed) affect VideoMaterial rendering on hardware?
Are there additional entitlements required for video texture rendering in RealityKit on physical hardware?
Attempted Solutions
Configured AVAudioSession with .playback category
Added delay before player.play() to ensure material is applied
Verified sphere scale inversion (-1, 1, 1)
Tested multiple video URLs (including raw, unlicensed MP4 files)
Confirmed network connectivity on device
Ruled out DRM/FairPlay issues by testing unprotected content
Environment Details
Device: Apple Vision Pro
visionOS Version: 26.2
Xcode Version: 26.2
macOS Version: Darwin 25.2.0
Hi, I'm developing an app for Vision Pro using Xcode. After installing the latest update, things that previously worked in my app suddenly stopped working.
In my app flow I'm tapping spheres to get their positions; for some reason there is an offset between where I tap and where the marker for that position shows up.
Here's the part of the code that does that, and the part responsible for the alignment that happens afterwards:
func loadMainScene(at position: SIMD3<Float>) async {
    guard let content = self.content else { return }
    do {
        let rootEntity = try await Entity(named: "surgery 16.09", in: realityKitContentBundle)
        rootEntity.scale = SIMD3<Float>(repeating: 0.5)
        rootEntity.generateCollisionShapes(recursive: true)
        self.modelRootEntity = rootEntity

        let bounds = rootEntity.visualBounds(relativeTo: nil)
        print("📏 Model bounds: center=\(bounds.center), extents=\(bounds.extents)")

        let pivotEntity = Entity()
        pivotEntity.addChild(rootEntity)
        self.pivotEntity = pivotEntity

        let modelAnchor = AnchorEntity(world: [1, 1.3, -0.8])
        modelAnchor.addChild(pivotEntity)
        content.add(modelAnchor)
        updateModelOpacity(0.5)
        self.modelAnchor = modelAnchor

        rootEntity.visit { entity in
            print("👀 Entity in model: \(entity.name)")
            if entity.name.lowercased().hasPrefix("focus") {
                entity.generateCollisionShapes(recursive: true)
                entity.components.set(InputTargetComponent())
                print("🎯 Made tappable: \(entity.name)")
            }
        }
        print("✅ Model loaded with collisions")

        guard let sphere = placementSphere else { return }
        let sphereWorldXform = sphere.transformMatrix(relativeTo: nil)
        var newXform = sphereWorldXform
        newXform.columns.3.y += 0.1 // move up by 10 cm

        let gridAnchor = AnchorEntity(world: newXform)
        self.gridAnchor = gridAnchor
        content.add(gridAnchor)

        let baseScene = try await Entity(named: "Scene", in: realityKitContentBundle)
        let gridSizeX = 18
        let gridSizeY = 10
        let gridSizeZ = 10
        let spacing: Float = 0.05
        let startX: Float = -Float(gridSizeX - 1) * spacing * 0.5 + 0.3
        let startY: Float = -Float(gridSizeY - 1) * spacing * 0.5 - 0.1
        let startZ: Float = -Float(gridSizeZ - 1) * spacing * 0.5

        for i in 0..<gridSizeX {
            for j in 0..<gridSizeY {
                for k in 0..<gridSizeZ {
                    if j < 2 || j > gridSizeY - 5 { continue } // remove 2 bottom, 4 top
                    let cell = baseScene.clone(recursive: true)
                    cell.name = "Sphere"
                    cell.scale = .one * 0.02
                    cell.position = [
                        startX + Float(i) * spacing,
                        startY + Float(j) * spacing,
                        startZ + Float(k) * spacing
                    ]
                    cell.generateCollisionShapes(recursive: true)
                    gridCells.append(cell)
                    gridAnchor.addChild(cell)
                }
            }
        }
        content.add(gridAnchor)
        print("✅ Grid added")
    } catch {
        print("❌ Failed to load: \(error)")
    }
}
private func handleModelOrGridTap(_ tappedEntity: Entity) {
    guard let modelRootEntity = modelRootEntity else { return }
    let localPosition = tappedEntity.position(relativeTo: modelRootEntity)
    let worldPosition = tappedEntity.position(relativeTo: nil)

    switch tapStep {
    case 0:
        modelPointA = localPosition
        modelAnchor?.addChild(createMarker(at: worldPosition, color: [1, 0, 0]))
        print("📍 Model point A: \(localPosition)")
        tapStep += 1
    case 1:
        modelPointB = localPosition
        modelAnchor?.addChild(createMarker(at: worldPosition, color: [1, 0.5, 0]))
        print("📍 Model point B: \(localPosition)")
        tapStep += 1
    case 2:
        targetPointA = worldPosition
        targetMarkerA = createMarker(at: worldPosition, color: [0, 1, 0])
        modelAnchor?.addChild(targetMarkerA!)
        print("✅ Target point A: \(worldPosition)")
        tapStep += 1
    case 3:
        targetPointB = worldPosition
        targetMarkerB = createMarker(at: worldPosition, color: [0, 0, 1])
        modelAnchor?.addChild(targetMarkerB!)
        print("✅ Target point B: \(worldPosition)")
        alignmentReady = true
        tapStep += 1
    default:
        print("⚠️ Unexpected tap on model helper at step \(tapStep)")
    }
}
func alignModel2Points() {
    guard let modelPointA = modelPointA,
          let modelPointB = modelPointB,
          let targetPointA = targetPointA,
          let targetPointB = targetPointB,
          let modelRootEntity = modelRootEntity,
          let pivotEntity = pivotEntity,
          let modelAnchor = modelAnchor else {
        print("❌ Missing data for alignment")
        return
    }

    let modelVec = modelPointB - modelPointA
    let targetVec = targetPointB - targetPointA
    let modelLength = length(modelVec)
    let targetLength = length(targetVec)
    let scale = targetLength / modelLength

    let modelDir = normalize(modelVec)
    let targetDir = normalize(targetVec)
    var axis = cross(modelDir, targetDir)
    let axisLength = length(axis)
    var rotation = simd_quatf()

    if axisLength < 1e-6 {
        if dot(modelDir, targetDir) > 0 {
            rotation = simd_quatf(angle: 0, axis: [0, 1, 0])
        } else {
            let up: SIMD3<Float> = [0, 1, 0]
            axis = cross(modelDir, up)
            if length(axis) < 1e-6 {
                axis = cross(modelDir, [1, 0, 0])
            }
            rotation = simd_quatf(angle: .pi, axis: normalize(axis))
        }
    } else {
        let dotProduct = dot(modelDir, targetDir)
        let clampedDot = max(-1.0, min(dotProduct, 1.0))
        let angle = acos(clampedDot)
        rotation = simd_quatf(angle: angle, axis: normalize(axis))
    }

    modelRootEntity.scale = .one * scale
    modelRootEntity.orientation = rotation

    let transformedPointA = rotation.act(modelPointA * scale)
    pivotEntity.position = -transformedPointA
    modelAnchor.position = targetPointA
    alignedModelPosition = modelAnchor.position

    print("✅ Aligned with scale \(scale), rotation \(rotation)")
}
Topic: Spatial Computing
SubTopic: General
I created a new Spatial Rendering App from the template in Xcode 26.0.1. When I run the app, click 'Show Immersive Space' and select my Vision Pro from the pop-up dialog, the content in the dialog flickers (which seems to indicate something crashed) and nothing appears on my Vision Pro.
I'm running the released macOS 26.0 (25A354) and visionOS 26.0 (23M336). Filed as FB20397093.
When the wearer's head moves naturally, the footage captured by the device shakes noticeably. This makes viewers of the live stream feel dizzy and seriously hurts both the immersive livestream experience and purchasing decisions.
Request:
Please improve the device's built-in stabilization algorithm to reduce the impact of ordinary head movement on image stability and make the livestream footage smoother.
Hi,
I'm looking to build something similar to the header blur in the App Store and Apple TV app settings. Does anyone know the best way to achieve this, so that when there is nothing behind the header it looks the same as the rest of the view background, but when content scrolls underneath it gets a blur effect? I've seen .scrollEdgeEffect on iOS 26; is there something similar for visionOS?
Thanks!
While using Screen Mirroring in Developer Mode within my immersive space, I noticed an alignment issue with the computer cursor (the transparent circle). When I move it toward an attachment view, the cursor remains horizontal instead of aligning with the surface of the attachment view. It displays correctly on a 2D window; it is only wrong on an attachment view.
Is this behavior a bug, or could it be caused by a missing or incorrect configuration on the attachment view?
Any help would be appreciated, thanks.
Hi,
A few days ago, Apple held another "Meet with Apple" event regarding immersive video, Create immersive media experiences for visionOS | Meet with Apple, days one and two, yet it is not appearing in the Developer app on my Mac. The same happened with
IETF HLS Interest Day | Meet with Apple.
Any tips?
Kind regards,
Bruno
Topic: Spatial Computing
SubTopic: General
Has anyone had success with MeshInstancesComponent? I tried to follow the sample code from What's New in RealityKit but it wouldn't compile. I was able to use one of the init overloads to get it to compile, but using it crashes both my device and the simulator. Even with one instance.
Hello,
I want to understand the current state of developing for Apple Vision Pro. I want to stream a video from a remote server in real time; it is a live stream, so I can't download it.
I want to stream a low-quality stream and a high-resolution stream; the server will only send the "box" where the user is looking. Are there any APIs to track where the user is looking within the experience?
Thanks,
At WWDC25 we launched a new type of Lab event for the developer community - Group Labs. A Group Lab is a panel Q&A designed for a large audience of developers. Group Labs are a unique opportunity for the community to submit questions directly to a panel of Apple engineers and designers. Here are the highlights from the WWDC25 Group Lab for visionOS.
I saw that there is a new way to add SwiftUI View attachments in my RealityView, what advantages does this have over the old way?
Attachments can now be added directly to your entities with ViewAttachmentComponent. This removes the need to declare your attachments upfront in your RealityView initializer and then add those attachments as child entities. The new approach provides greater flexibility. Canyon Crosser and Petite Asteroids both utilize the new approach.
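As a rough illustration of the new approach (a minimal sketch, not taken from either sample):
import SwiftUI
import RealityKit

// Minimal sketch: attach a SwiftUI view directly to an entity via ViewAttachmentComponent.
struct LabelledEntityView: View {
    var body: some View {
        RealityView { content in
            let label = Entity()
            label.components.set(ViewAttachmentComponent(rootView: Text("Hello, visionOS 26")))
            label.position = [0, 1.5, -1]   // roughly eye level, one meter in front
            content.add(label)
        }
    }
}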
ManipulationComponent looks really cool! Right now my app has a series of complicated custom gestures. What gestures does it handle for me exactly, and are there any situations where I should prefer my own custom gestures?
ManipulationComponent provides natural interaction with virtual objects. It seamlessly handles translation and rotation. You can easily add manipulation to a SwiftUI view like Model3D with the manipulable view modifier.
The new Object Manipulation API is great for most apps, and is a breeze to implement, but sometimes you might want a more custom feel, and that’s ok! Custom gestures are still fully supported for that scenario.
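For example (a minimal sketch of the two routes mentioned above; the model name and entity are illustrative):
import SwiftUI
import RealityKit

// Minimal sketch: the manipulable view modifier on a Model3D (model name is illustrative)...
struct RocketView: View {
    var body: some View {
        Model3D(named: "ToyRocket")
            .manipulable()
    }
}

// ...or configuring manipulation directly on a RealityKit entity.
func enableManipulation(on entity: Entity) {
    ManipulationComponent.configureEntity(entity)
}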
I saw that there is a new API to also access the right main camera. What can I do with this?
Correct, in visionOS 26, you can access the left and right main cameras. You can even access them simultaneously as a stereo pair. Camera access still requires a managed entitlement and an enterprise license, see Accessing the main camera for more details about those requirements.
More computer vision and machine learning use-cases are unlocked with access to both cameras, we are excited to see what you will do!
What do I need to do to add spatial accessory input for my app?
First, use the GameController framework to establish a connection with the spatial accessory, and then listen for events from the controller. Then, you can use either RealityKit, ARKit, or a combination of both to track the accessory, anchor virtual content to it, and fine tune the accessory interaction with the content in your app.
For more details, check out Discovering and tracking spatial game controllers and styli.
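As a small sketch of the first step (standard GameController discovery; tracking and anchoring are app-specific):
import GameController

// Minimal sketch: get notified when a spatial accessory (controller) connects.
func observeAccessoryConnections() {
    NotificationCenter.default.addObserver(
        forName: .GCControllerDidConnect,
        object: nil,
        queue: .main
    ) { notification in
        guard let controller = notification.object as? GCController else { return }
        print("Connected: \(controller.vendorName ?? "Unknown controller")")
        // From here, use RealityKit and/or ARKit to track the accessory and anchor content to it.
    }
}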
By far, the most difficulty with implementing visionOS apps is SwiftUI window management…placing, opening, closing, etc. Are there any improvements to window management in visionOS 26?
Yes! We recommend watching Set the scene with SwiftUI in visionOS.
You can use the defaultLaunchBehavior to choose whether a particular window is presented (or suppressed) at launch. You can also prevent a window like a secondary toolbar from launching as the initial window using .restorationBehavior(.disabled). Adopting best practices for persistent UI provides a great overview of SwiftUI window management on visionOS.
As for placing windows, there is still no API for an app to specify the placement of its windows other than relative placement. If that is a feature you are interested in, please file an enhancement request for it using Feedback Assistant!
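A small sketch of those two modifiers together (scene IDs and view names are illustrative):
import SwiftUI

// Illustrative sketch: keep a secondary toolbar window from opening or restoring at launch.
@main
struct MyApp: App {
    var body: some Scene {
        WindowGroup(id: "Main") {
            MainView()
        }

        WindowGroup(id: "SecondaryToolbar") {
            SecondaryToolbarView()
        }
        .defaultLaunchBehavior(.suppressed)   // not presented at launch
        .restorationBehavior(.disabled)       // not restored as the initial window
    }
}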
How to get access to the Enterprise API?
First, request the entitlement and license through your Apple Developer or enterprise account. Once these have been granted, include the license and entitlement in your project. Then you can build, test, and distribute as an in-house app.
Topic: Spatial Computing
SubTopic: General
Hi all,
I am currently developing a game in Unity for visionOS, and I'd prefer to use the PSVR2 controllers as the source of the raycast for menu selection instead of the default visionOS gaze, for my specific use case. Is there a way to access the IMU of the PSVR2 controllers to do this, instead of just using eye gaze + controller click for selection? Is there a specific configuration for GCController from within Unity, maybe?
Thank you!
Environment Versions
・macOS 15.6.1
・visionOS 26.0.1
・Xcode 16.1 or 26.0.1
・Unity 6000.2.9f1
・Apple.Core 3.2.0
・Apple.PHASE 1.2.7
・PolySpatial 2.4.2
With the above environment, after installing Apple.PHASE into Unity and building to a visionOS device, audio plays and distance attenuation works, but Early Reflection and Late Reverb produce no audible change even when they are enabled and their parameters are adjusted.
What is required to make Early Reflection and Late Reverb take effect on a visionOS device build?
Actions taken:
・Created a SoundEvent.
・In Composer, created a Sampler and a SpatialMixer; attached an AudioClip to the Sampler; enabled Direct Path, Early Reflection, and Late Reverb on the SpatialMixer.
・Attached a PHASE Source to the object to be played, attached the created SoundEvent to it, and set non-zero values for Early Reflection and Late Reverb.
・Attached a PHASE Listener to the main camera and set the ReverbPreset to a value other than None.
・In Project Settings > Audio, set the Spatializer plugin to PHASE Spatializer.
・From there, built for visionOS.
The purpose is to create a simple web-based gallery of spatial photos and videos using static html files. I have successfully displayed spatial photos using the img tag and IMG.heic files. I can tap and hold the image to bring up the contextual menu and from there select View Spatial Photo. Is there any way to add a control to the image, like a link or overlay on the image itself, that a user can simply tap to show the image in 3D? And how to host a video file on a web page without going through a CDN/streaming service? Sample html would be much appreciated.
Topic: Spatial Computing
SubTopic: General
I saw this magical sand table; its unfolding and folding effects are similar to spreading out cards, which is very interesting, but I don't know how to achieve it. I'd like to know whether there are ways to achieve this effect and get some ideas. Can this effect be achieved with the existing APIs?
In my visionOS app, I have two types of windows:
Main App Window – This is the default window that launches when the app starts. It displays the video listings and other primary content.
Immersive Space Window – This opens only when a user starts streaming or playing a video.
Issue:
When entering the immersive space, the main app window remains visible in front of it unless it is manually closed. To avoid this, I currently close the main window when transitioning to the immersive space and reopen it when exiting. However, this causes the app to restart instead of resuming from its previous state.
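Roughly, this is what I do today (a simplified sketch; window and space IDs are illustrative):
import SwiftUI

// Simplified sketch of my current transition code (window/space IDs are illustrative).
struct PlayerControls: View {
    @Environment(\.openWindow) private var openWindow
    @Environment(\.dismissWindow) private var dismissWindow
    @Environment(\.openImmersiveSpace) private var openImmersiveSpace
    @Environment(\.dismissImmersiveSpace) private var dismissImmersiveSpace

    var body: some View {
        Button("Play") {
            Task {
                _ = await openImmersiveSpace(id: "Player")
                dismissWindow(id: "Main")   // close the main window so it doesn't block the space
            }
        }
        Button("Stop") {
            Task {
                await dismissImmersiveSpace()
                openWindow(id: "Main")      // reopening recreates the window, so its state is lost
            }
        }
    }
}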
Desired Behavior:
I want the main app window to retain its state and seamlessly resume from where it was before entering immersive mode, rather than restarting.
Attempts & Challenges:
Tried managing opacity and visibility, but neither worked as expected.
Couldn’t find a way to push the main window to the background while bringing the immersive space to the foreground.
Looking for a solution to keep the main window’s state intact while transitioning between immersive and normal modes.
I’m working on a Vision Pro app using Metal and need to implement multi-pass rendering. Specifically, I want to render intermediate results to a texture, then use that texture in a second pass for post-processing before presenting the final output.
What’s the best approach in visionOS? Should I use multiple render passes in a single command buffer or separate command buffers? Any insights on efficiently handling this in RealityKit or Metal?
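To make the question concrete, here is roughly the structure I have in mind: two passes encoded into one command buffer, where the first pass writes to an offscreen texture that the second pass samples (all pipelines, textures, and draw calls are placeholders):
import Metal

// Rough sketch of the two-pass structure (pipelines, textures, and draw calls are placeholders;
// the final target would be the layer renderer's drawable texture in a Compositor Services app).
func encodeFrame(commandQueue: MTLCommandQueue,
                 scenePipeline: MTLRenderPipelineState,
                 postPipeline: MTLRenderPipelineState,
                 offscreenTexture: MTLTexture,
                 finalTarget: MTLTexture) {
    guard let commandBuffer = commandQueue.makeCommandBuffer() else { return }

    // Pass 1: render the scene into the offscreen texture.
    let firstPass = MTLRenderPassDescriptor()
    firstPass.colorAttachments[0].texture = offscreenTexture
    firstPass.colorAttachments[0].loadAction = .clear
    firstPass.colorAttachments[0].storeAction = .store
    if let encoder = commandBuffer.makeRenderCommandEncoder(descriptor: firstPass) {
        encoder.setRenderPipelineState(scenePipeline)
        // ... encode scene draw calls here ...
        encoder.endEncoding()
    }

    // Pass 2: post-process by sampling the offscreen texture into the final target.
    let secondPass = MTLRenderPassDescriptor()
    secondPass.colorAttachments[0].texture = finalTarget
    secondPass.colorAttachments[0].loadAction = .clear
    secondPass.colorAttachments[0].storeAction = .store
    if let encoder = commandBuffer.makeRenderCommandEncoder(descriptor: secondPass) {
        encoder.setRenderPipelineState(postPipeline)
        encoder.setFragmentTexture(offscreenTexture, index: 0)
        // ... draw a full-screen quad that applies the post effect ...
        encoder.endEncoding()
    }

    commandBuffer.commit()
}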
Thanks!
I want to let users place 2D/3D “artworks” on detected walls and have them reappear in exactly the same real‑world spot after quitting and relaunching the app (like widgets do, but for my own entities).Environment:
Xcode 26, visionOS 2.0, RealityKit + ARKitSession/WorldTrackingProvider
Entities are parented to a holder that’s aligned to a wall via plane/mesh raycasts
What I’ve tried:
Create a WorldAnchor at placement, save UUID + full 4×4 transform
On next launch, re-create the WorldAnchor (or set the saved transform) and attach the entity
Gate restore on relocalization/mesh updates and disable all raycast/search after restore
Issue:
After relaunch, placement still resolves relative to current device pose, not the same wall position.
Questions:
Is there a public API in visionOS 2.0 to persist app‑managed world anchors across sessions (room‑fixed), e.g., AnchorStore or equivalent?
If not, what’s the recommended pattern to reliably restore wall‑anchored content?
Are persistence features mentioned for widgets/windows available to third‑party RealityKit entities?
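For reference, this is roughly my current save/restore path (a simplified sketch; wallTransform, holderEntity, and the UserDefaults key are placeholders, and it assumes a WorldTrackingProvider already running on an ARKitSession):
import ARKit
import RealityKit

// Simplified sketch of placement: create a world anchor at the wall transform and remember its id.
func placeAndPersist(worldTracking: WorldTrackingProvider,
                     wallTransform: simd_float4x4) async throws {
    let anchor = WorldAnchor(originFromAnchorTransform: wallTransform)
    try await worldTracking.addAnchor(anchor)
    UserDefaults.standard.set(anchor.id.uuidString, forKey: "artworkAnchorID")
}

// On relaunch: wait for the provider to deliver anchors again and re-attach the holder
// when the saved id shows up.
func restoreOnLaunch(worldTracking: WorldTrackingProvider,
                     holderEntity: Entity) async {
    let savedID = UserDefaults.standard.string(forKey: "artworkAnchorID")
    for await update in worldTracking.anchorUpdates {
        guard update.anchor.id.uuidString == savedID else { continue }
        holderEntity.setTransformMatrix(update.anchor.originFromAnchorTransform,
                                        relativeTo: nil)
    }
}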
Topic: Spatial Computing
SubTopic: General
Hi,
I would like clarification on whether the new hover effects feature introduced in visionOS 26 supports pinch gestures through the PSVR2 controllers.
In your sample application, I found that this was not working. Pulling the trigger on the controller while looking at the 3D object did not activate the hover effect spatial event (the object does show the highlight, though); only pinch-clicking with my fingers seems to register/trigger the spatial event.
I am using visionOS 26.3.
This is inconsistent with how the PSVR2 controller behaves on SwiftUI views and UI elements, where the trigger press does count as a button click.
The sample I used was this one:
https://developer.apple.com/documentation/compositorservices/rendering_hover_effects_in_metal_immersive_apps
Hi guys,
In visionOS, when using a ZStack decorated with .glassBackgroundEffect(), you can see the 3D glass background from the front, but when viewed from the side, the view appears to have no thickness.
However, I noticed that in an app built by Apple, when viewing a glass background view from the side, it appears to have thickness.
I tried adding .frame(depth:) to a glass background view, but it appears as two separate layers spaced by the depth value.
My question is:
Is there a view modifier that adds visual thickness to a glass background view, as shown in the picture?
Or, if not, how should I write a custom view modifier to achieve this effect? Thanks!