Hi!
I'd like to share a technical sample app, SKRenderer Demo.
This app demonstrates:
Setting up SKRenderer
Recording SpriteKit scenes to image sequences
Recording SpriteKit scenes to video using IOSurface and AVFoundation
Applying Core Image filters
Exploring SpriteKit's simulation timing and physics determinism
Use Case
Record SpriteKit simulations as video or images for sharing and creating content.
I explored several approaches, including the excellent view.texture(from:crop:) for live recording from SKView. The SKRenderer approach assumes recording happens asynchronously: you capture user interactions as commands during live interaction, then replay those commands through an offline render pass to generate the final output.
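For anyone curious what the offline pass looks like, the core SKRenderer loop is roughly this (a minimal sketch, assuming an already-configured SKScene; the resolution, frame count, and 60 Hz time step are placeholders, not the demo's exact values):

import SpriteKit
import Metal

// Minimal sketch of an offline SKRenderer pass. `scene` is assumed to be an
// already-configured SKScene; resolution, frame count, and time step are placeholders.
func renderOffline(scene: SKScene) {
    guard let device = MTLCreateSystemDefaultDevice(),
          let commandQueue = device.makeCommandQueue() else { return }

    let renderer = SKRenderer(device: device)
    renderer.scene = scene

    // Texture that each frame is encoded into.
    let desc = MTLTextureDescriptor.texture2DDescriptor(pixelFormat: .bgra8Unorm,
                                                        width: 1920, height: 1080,
                                                        mipmapped: false)
    desc.usage = [.renderTarget, .shaderRead]
    guard let target = device.makeTexture(descriptor: desc) else { return }

    let passDesc = MTLRenderPassDescriptor()
    passDesc.colorAttachments[0].texture = target
    passDesc.colorAttachments[0].loadAction = .clear
    passDesc.colorAttachments[0].storeAction = .store

    // Step the simulation clock at a fixed 60 Hz and render one frame per step.
    for frame in 0..<300 {
        renderer.update(atTime: TimeInterval(frame) / 60.0)
        guard let commandBuffer = commandQueue.makeCommandBuffer() else { continue }
        renderer.render(withViewport: CGRect(x: 0, y: 0, width: 1920, height: 1080),
                        commandBuffer: commandBuffer,
                        renderPassDescriptor: passDesc)
        commandBuffer.commit()
        commandBuffer.waitUntilCompleted()
        // Read `target` back here (CGImage, IOSurface, or AVAssetWriter input).
    }
}

Driving update(atTime:) with a fixed step keeps the replayed simulation clock independent of how long each frame takes to encode.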
I hope this helps others working on replay systems, simulation capture, or SpriteKit projects in general!
I've tried out a ParticleEmitter in Reality Composer Pro to produce a burst of particles that don't move (i.e. speed close to zero).
When viewing from different angles, the particles are clearly rendered in exactly the wrong order, that is, front first and back last: particles at the back obscure particles at the front.
I would prefer it the correct way around.
I've only tried this interactively in Reality Composer Pro, not programmatically, but I assume I would get the same result.
My Reality Composer Pro "File" (zipped):
https://gert-rieger-edv.de/Posts/Post-1/RealityParticles.zip
Screenshot:
Click on the ParticleEmitter object, then on its Play button, then select the Particles tab and click on "Burst" a few times to get a few random particles.
Mac Studio 2025
Apple M4 Max
macOS 15.7.2 (24G325)
Reality Composer Pro
Version 2.0 (494.60.2)
I implemented an EntityAction to change the baseColor tint, and had it working on visionOS 2.x.
import RealityKit
import UIKit

typealias Float4 = SIMD4<Float>

extension UIColor {
    var float4: Float4 {
        if
            cgColor.numberOfComponents == 4,
            let c = cgColor.components
        {
            Float4(Float(c[0]), Float(c[1]), Float(c[2]), Float(c[3]))
        } else {
            Float4()
        }
    }
}
struct ColourAction: EntityAction {

    // MARK: - PUBLIC PROPERTIES
    let startColour: Float4
    let targetColour: Float4

    // MARK: - PUBLIC COMPUTED PROPERTIES
    var animatedValueType: (any AnimatableData.Type)? { Float4.self }

    // MARK: - INITIATION
    init(startColour: UIColor, targetColour: UIColor) {
        self.startColour = startColour.float4
        self.targetColour = targetColour.float4
    }

    // MARK: - PUBLIC STATIC FUNCTIONS
    @MainActor static func registerEntityAction() {
        ColourAction.subscribe(to: .updated) { event in
            guard let animationState = event.animationState else { return }
            let interpolatedColour = event.action.startColour.mixedWith(event.action.targetColour, by: Float(animationState.normalizedTime))
            animationState.storeAnimatedValue(interpolatedColour)
        }
    }
}
extension Entity {

    // MARK: - PUBLIC FUNCTIONS
    func changeColourTo(_ targetColour: UIColor, duration: Double) {
        guard
            let modelComponent = components[ModelComponent.self],
            let material = modelComponent.materials.first as? PhysicallyBasedMaterial
        else {
            return
        }

        let colourAction = ColourAction(startColour: material.baseColor.tint, targetColour: targetColour)

        if let colourAnimation = try? AnimationResource.makeActionAnimation(for: colourAction, duration: duration, bindTarget: .material(0).baseColorTint) {
            playAnimation(colourAnimation)
        }
    }
}
This doesn't work in visionOS 26. My current fix is to set the material base colour directly, but this feels like the wrong approach:
@MainActor static func registerEntityAction() {
    ColourAction.subscribe(to: .updated) { event in
        guard
            let animationState = event.animationState,
            let entity = event.targetEntity,
            let modelComponent = entity.components[ModelComponent.self],
            var material = modelComponent.materials.first as? PhysicallyBasedMaterial
        else { return }

        let interpolatedColour = event.action.startColour.mixedWith(event.action.targetColour, by: Float(animationState.normalizedTime))
        material.baseColor.tint = UIColor(interpolatedColour)
        entity.components[ModelComponent.self]?.materials[0] = material
        animationState.storeAnimatedValue(interpolatedColour)
    }
}
So before I raise this as a bug: was I doing something wrong in the former version and just got lucky? Is there a better approach?
Context
I’m deploying large language models on iPhone using llama.cpp. A new iPhone Air (12 GB RAM) reports a Metal MTLDevice.recommendedMaxWorkingSetSize of 8,192 MB, and my attempt to load Llama-2-13B Q4_K (~7.32 GB weights) fails during model initialization.
Environment
Device: iPhone Air (12 GB RAM)
iOS: 26
Xcode: 26.0.1
Build: Metal backend enabled llama.cpp
App runs on device (not Simulator)
What I’m seeing
MTLCreateSystemDefaultDevice().recommendedMaxWorkingSetSize == 8192 MiB
Loading Llama-2-13B Q4_K (7.32 GB) fails to complete. Logs indicate memory pressure / allocation issues consistent with the 8 GB working-set guidance.
Smaller models (e.g., 7B/8B with similar quantization) load and run (8B Q4_K provides around 9 tokens/second decoding speed).
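For reference, the numbers in question can be read straight off the device (a minimal sketch):

import Metal

// Minimal sketch: query the Metal memory limits relevant to large model weights.
if let device = MTLCreateSystemDefaultDevice() {
    let workingSetMiB = Double(device.recommendedMaxWorkingSetSize) / (1024 * 1024)
    let maxBufferMiB  = Double(device.maxBufferLength) / (1024 * 1024)
    print("recommendedMaxWorkingSetSize: \(workingSetMiB) MiB")
    print("maxBufferLength:              \(maxBufferMiB) MiB")
    print("hasUnifiedMemory:             \(device.hasUnifiedMemory)")
    print("currentAllocatedSize:         \(device.currentAllocatedSize) bytes")
}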
Questions
Is 8,192 MB an expected recommendedMaxWorkingSetSize on a 12 GB iPhone?
What values should I expect on other 2025 devices, including iPhone 17 (8 GB RAM) and iPhone 17 Pro (12 GB RAM)?
Is the limit strictly enforced for Metal allocations (heaps/buffers), or is it advisory for best performance/eviction behavior?
Can a process practically exceed this for long-lived buffers without immediate Jetsam risk?
Any guidance for LLM scenarios near the limit?
I am trying to create a simple portal like the one in RealityKit, but using Metal instead of RealityKit. Has anyone been able to create a window- or portal-like effect that shows a skybox outside in mixed reality?
I'm trying to use MTLBinaryArchive. I collected a BinaryArchive from one device and used metal-tt to translate it for all supported iPhone devices, ranging from iPhone 7 Plus to iPhone 16.
However, this BinaryArchive is quite large, around 1.5GB uncompressed, and about 500MB compressed in the IPA. I'm wondering how to address the size issue.
I watched the WWDC 2022 video, which mentioned that the operating system or app installation process would handle compatibility. Does this compatibility support different GPU chips? I tried installing an IPA with a BinaryArchive collected only from an iPhone 12 on an iPhone 13, but the BinaryArchive didn't take effect.
I also saw that Apple supports App Thinning. However, it seems that resources in the Asset Catalog cannot be accessed via URL, and creating an MTLBinaryArchive requires a URL. Is it possible for MTLBinaryArchive to be distributed through App Thinning?
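For context, this is roughly how an archive gets loaded from a URL today, which is where the Asset Catalog limitation bites (a sketch; the resource name and extension are placeholders):

import Foundation
import Metal

// Minimal sketch: MTLBinaryArchiveDescriptor takes a file URL, which is why an
// Asset Catalog resource (no URL) can't be used directly. Resource name and
// extension are placeholders.
func loadPipelineArchive(device: MTLDevice) throws -> MTLBinaryArchive? {
    guard let url = Bundle.main.url(forResource: "PipelineArchive",
                                    withExtension: "bin") else { return nil }
    let descriptor = MTLBinaryArchiveDescriptor()
    descriptor.url = url
    return try device.makeBinaryArchive(descriptor: descriptor)
}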
The WWDC 2022 video also mentioned using the -Os optimization flag to reduce size. Is there an estimate of how much size reduction that typically achieves? Are there any methods to solve the BinaryArchive size issue without impacting performance?
Turn-based games: 2 players
When an opponent declines a game in the Game Center MatchMaker VC, that player sees that they quit, but no message is sent to the listener about that fact. For the person who started the match, their MMVC shows it's their turn again. Why doesn't Game Center end the match?
I have code that captures a window and displays a cropped image. The problem is twofold: ScreenCaptureKit doesn't seem to allow me to modify, stop, and recapture the image in window mode so that only a portion of the screen is captured.
So I have to crop and display the cropped image via a published variable. This all works fine, but it seems to stop after some time.
I'm using an M1 with 16 GB of RAM; the program uses less than 100 MB of memory with roughly 40-70% CPU.
It prints capture success in debug mode, and sometimes the frame isn't valid, so I guard against that.
Any ideas on how to improve my strategy?
Can the App Store app 『浮遊時計 Premium』 (Floating Clock Premium) measure the refresh rate in 1 Hz or 10 Hz increments, or at three or more levels?
We have a macOS app (not yet released, but in use by ourselves), that provides scoreboards for streaming sport events.
Today it is expected that there are nice animations for goals, etc. We are streaming using NDI, which requires a CVPixelBuffer for each frame.
We currently create these animations using CABasicAnimation, CAAnimation and CAKeyframeAnimation. In addition we use ScreenCaptureKit to generate the frames.
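For context, the capture side is a fairly standard SCStream setup along these lines (a sketch; the window title, output object, and queue are assumptions, not our exact code):

import ScreenCaptureKit
import CoreMedia
import CoreVideo

// Sketch: capture one window into CVPixelBuffers for the NDI sender.
// `output` (an SCStreamOutput that receives CMSampleBuffers) and the window
// title are assumptions, not part of the original post.
func startScoreboardCapture(output: SCStreamOutput, queue: DispatchQueue) async throws -> SCStream? {
    let content = try await SCShareableContent.excludingDesktopWindows(false, onScreenWindowsOnly: true)
    guard let window = content.windows.first(where: { $0.title == "Scoreboard" }) else { return nil }

    let config = SCStreamConfiguration()
    config.width = 1920
    config.height = 1080
    config.minimumFrameInterval = CMTime(value: 1, timescale: 30)   // 30 fps target
    config.pixelFormat = kCVPixelFormatType_32BGRA

    let filter = SCContentFilter(desktopIndependentWindow: window)
    let stream = SCStream(filter: filter, configuration: config, delegate: nil)
    try stream.addStreamOutput(output, type: .screen, sampleHandlerQueue: queue)
    try await stream.startCapture()
    return stream
}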
This works fine at 25/30 fps as long as the window in which our animations are performed is visible. But this is not how it should be: our main app window is a smaller control display that performs the animations at reduced size, while the streamed animations need to be in HD and later maybe in 4K.
When using an offscreen window, the animations are not calculated and we get about 1 frame per second. So we currently have to connect an external display to the MacBook and open the large window there, which is an ugly solution.
Do we use a completely wrong approach? Or is there a way to tell the macOS to perform the animations although it is an offscreen window?
If it cannot work that way, what is an alternative?
After updating iPad/iPhone devices from iOS 18 to iOS 26, PhotogrammetrySession intermittently crashes during photogrammetry processing. The same workflow was stable on iOS 18 with no code changes to the app.
Environment:
OS versions: Works on iOS 18, crashes on iOS 26
Device: iPad/iPhone (reproducible across devices)
Source images: ~170-200 JPG files at 2160 x 3840 resolution
Reproduction:
The crash occurs consistently on the second or third sequential run of the photogrammetry session with the same image set. First run typically succeeds.
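For reference, each run follows the standard on-device session flow, roughly like this (a sketch; the URLs, detail level, and configuration are placeholders rather than the exact app code):

import Foundation
import RealityKit

// Sketch of one sequential photogrammetry run (URLs and detail level are
// placeholders; the app repeats this with the same image folder).
func runReconstruction(images: URL, output: URL) async throws {
    var configuration = PhotogrammetrySession.Configuration()
    configuration.featureSensitivity = .normal
    let session = try PhotogrammetrySession(input: images, configuration: configuration)

    try session.process(requests: [.modelFile(url: output, detail: .reduced)])

    for try await result in session.outputs {
        switch result {
        case .processingComplete:
            print("Run finished")
        case .requestError(let request, let error):
            print("Request \(request) failed: \(error)")
        default:
            break
        }
    }
}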
Crash details:
Xcode shows an uncaught exception during image processing:
terminating due to uncaught exception of type std::bad_alloc: std::bad_alloc
VTPixelTransferSession 420f sid 269 (2160.00 x 3840.00) [0.00 0.00 2160 3840]
rowbytes( 2160, 2160 ) Color( (null), 0x0, (null), (null), ITU_R_601_4 )
=> 24 sid 19 (2160.00 x 3840.00) [0.00 0.00 2160 3840] rowbytes( 6528 )
Color( 0x0, (null), (null), (null) )
This appears to be a memory allocation failure in VTPixelTransferSession during color space conversion. Has anyone else experienced similar crashes with CorePhotogrammetry on iOS 26, or found workarounds?
I have two devices (iPod, iPhone), each using a different Apple ID. I have an existing game to which I'm adding TBM. When the iPod invites the iPhone, it sends an iMessage invite to the iPhone; when I click on that message, I get "Retrieving", then Game Center in Settings is opened, not my App (same version installed on both devices). I start my App on the iPhone and that match is not shown in the Matchmaker View Controller.
When I send an invite from the iPhone to the iPod and I click on the iMessage invite, the app starts but the match isn't listed in the MatchMaker ViewController on the iPod (but is on the iPhone).
In addition, when I click on the info circle on the iPhone, it shows the two players and "App Store" under the Game Center name. However, when I do the same on the iPod, it shows "Play your turn" there.
Any ideas?
Problem Summary
After upgrading to iOS 26.1 and 26.2, I'm experiencing a particle positioning bug in RealityKit where ParticleEmitterComponent particles render at an incorrect offset relative to their parent entity. This behavior does not occur on iOS 18.6.2 or earlier versions, suggesting a regression introduced in the newer OS builds.
Environment Details
Operating System: iOS 26.1 & iOS 26.2
Framework: RealityKit
Xcode Version: 16.2 (16C5032a)
Expected vs. Actual Behavior
Expected: Particles should render at the position of the entity to which the ParticleEmitterComponent is attached, matching the behavior on iOS 18.6.2 and earlier.
Actual: Particles appear away from their parent entity, creating a visual misalignment that breaks the intended AR experience.
Steps to Reproduce
Create or open an AR application with RealityKit that uses particle components
Attach a ParticleEmitterComponent to an entity via a custom system
Run the application on iOS 26.1 or iOS 26.2
Observe that particles render at an offset position away from the entity
Minimal Code Example
Here's the setup from my test case:
Custom Component & System:
struct SparkleComponent4: Component {}

class SparkleSystem4: System {
    static let query = EntityQuery(where: .has(SparkleComponent4.self))

    required init(scene: Scene) {}

    func update(context: SceneUpdateContext) {
        for entity in context.scene.performQuery(Self.query) {
            // Only add once
            if entity.components.has(ParticleEmitterComponent.self) { continue }
            var newEmitter = ParticleEmitterComponent()
            newEmitter.mainEmitter.color = .constant(.single(.red))
            entity.components.set(newEmitter)
        }
    }
}
AR Setup:
let material = SimpleMaterial(color: .gray, roughness: 0.15, isMetallic: true)
let model = Entity()
model.components.set(ModelComponent(mesh: boxMesh, materials: [material]))
model.components.set(SparkleComponent4())
model.position = [0, 0.05, 0]
model.name = "MyCube"
let anchor = AnchorEntity(.plane(.horizontal, classification: .any, minimumBounds: [0.2, 0.2]))
anchor.addChild(model)
arView.scene.addAnchor(anchor)
Questions for the Community
Has anyone else encountered this particle positioning issue after updating to iOS 26.1/26.2?
Are there known workarounds or configuration changes to ParticleEmitterComponent that restore correct positioning?
Is this a confirmed bug, or could there be a change in coordinate system handling or transform inheritance that I'm missing?
Additional Information
I've already submitted this issue via Feedback Assistant (FB21346746).
Hello!
I have a question about how thread groups work with tile shading. When running "traditional" compute, I get to choose both the thread group size and the grid size. However, when using a tile shading kernel I only have the dispatchThreadsPerTile method, which controls how many threads will be run in each tile. So far so good, but what about thread groups?
The examples in video "Tile Shading on A11" seem to suggest that there will be only one thread group per tile. In the video, [[thread_index_in_threadgroup]] is called "local_id" and it is used to access the image block.
I assume this is the default configuration. So when one does the following:
Creates MTLRenderPassDescriptor with tileWidth set to W and tileHeight set to H
Fires up the tile shading kernel using dispatchThreadsPerTile with MTLSize size = { W, H, 1 }
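For concreteness, those two steps look roughly like this on the CPU side (a sketch; the command buffer and the tile pipeline state are assumed to exist, and attachments are omitted):

import Metal

// Sketch: tile dimensions on the pass descriptor, then one kernel thread per tile
// pixel via dispatchThreadsPerTile. `commandBuffer` and `tilePipelineState` (a
// render pipeline built from a tile function) are assumptions.
func encodeTilePass(commandBuffer: MTLCommandBuffer,
                    passDesc: MTLRenderPassDescriptor,
                    tilePipelineState: MTLRenderPipelineState) {
    passDesc.tileWidth = 16
    passDesc.tileHeight = 16

    guard let encoder = commandBuffer.makeRenderCommandEncoder(descriptor: passDesc) else { return }
    // ... draw calls that populate the imageblock go here ...
    encoder.setRenderPipelineState(tilePipelineState)
    encoder.dispatchThreadsPerTile(MTLSize(width: passDesc.tileWidth,
                                           height: passDesc.tileHeight,
                                           depth: 1))
    encoder.endEncoding()
}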
I understand that the result is a 1-to-1 mapping between the tile "pixels" and kernel threads. Now, what I would like to do is have more than one thread group there. I want this for performance reasons: I have a certain compute kernel which I know executes very well with a small thread group size. In fact, { 32, 1, 1 } seems to be the fastest. My understanding is that even if I set the tile size to 16x16, and so I am executing 256 threads there, there will only be one SIMD group active in a thread group, meaning that this SIMD group has to execute 8 times over the tile.
Is this possible somehow? Or do the limitations of the API reflect limitations of the hardware itself, meaning that if I want to execute with SIMD-group-sized thread groups I have to use a "traditional" compute encoder?
I will be grateful for any help.
Michał
I am working on a remote control application for macOS where I need to maintain two "virtual" cursors:
Remote Cursor: Follows the remote user's movements.
Local Cursor: Follows the local user's physical mouse/trackpad movements.
To move the system cursor (for the remote side), I use CGWarpMouseCursorPosition as follows:
void DualCursorMac::UpdateSystemCursorPosition(int x, int y) {
    CGPoint point = CGPointMake(static_cast<CGFloat>(x), static_cast<CGFloat>(y));
    // Warp the cursor to match remote coordinates
    CGWarpMouseCursorPosition(point);
}
Meanwhile, I use a CGEventTap to monitor local physical movements to update my local virtual cursor's UI:
CGEventRef Mouse::MouseTapCallback(CGEventTapProxy proxy, CGEventType type, CGEventRef event, void *refcon) {
    if (remoteControlMode) {
        // We want to suppress system cursor movement but still read the delta
        const int deltaX = static_cast<int>(CGEventGetIntegerValueField(event, kCGMouseEventDeltaX));
        const int deltaY = static_cast<int>(CGEventGetIntegerValueField(event, kCGMouseEventDeltaY));
        NSLog(@"MouseTapCallback: delta:(%d, %d)", deltaX, deltaY);

        // Update local virtual cursor UI based on deltas...
        return nullptr; // Consume the event
    }
    return event;
}
The Problem:
When CGWarpMouseCursorPosition is called frequently to update the system cursor, it interferes with the kCGMouseEventDeltaX/Y values in the Event Tap.
Specifically, if the local user moves the trackpad slowly (expecting deltas of 1 or 2), but a "Warp" occurs simultaneously (e.g., jumping the cursor from (100, 100) to (300, 300)), the deltaX and deltaY in the callback suddenly spike to very large values. It seems the system calculates the delta based on the new warped position rather than the pure physical displacement of the trackpad.
This makes the local virtual cursor "jump" erratically and makes it impossible to track smooth local movement during remote control.
My Question:
Is there a way to get the "raw" or "pure" physical relative movement (delta) from the trackpad/mouse that is independent of the system cursor's absolute position or warping?
Are there alternative APIs (perhaps IOKit or different CGEvent fields) that would allow me to get consistent deltas even when the cursor is being warped programmatically?
My procedural animation using IKRig occasionally jitters and throws this warning. It completely breaks immersion.
getMoCapTask: Rig must be of type kCoreIKSolverTypePoseRetarget
Any idea where to start looking or what could be causing this?
Hi all,
I've developed some code that enables an arcball camera interaction with my scene. I've done this using components and systems. The implementation feels a bit messy as I've got gesture code on my realityView, and then a bunch of other code that uses those gesture inputs in my component and system.
Is there a demo app, or some example code, that shows a nice way to encapsulate these things into one item for custom cameras, something like Apple's .realityViewCameraControls(.orbit)?
If not can anyone recommend an approach to take?
Hello
I would like to know how to combine 2 animations with RealityKit (one animation for the arms and one for the legs, for example).
I saw this Apple demo that seems to explain it, but I don't understand at all how to do it...
Thanks
Hi all,
How would I go about creating a plugin/class to support a new (physical/hardware) device with the Game Controller framework?
Between GCVirtualController on iOS and the "KeyboardAndMouseSupport.bundle" I see inside GameController.framework on my Mac, it looks like the framework must be designed to support this but I can't find any documentation.
Thanks!
I added achievements to my approved app for the next release version, which I am running in the Simulator. When I look at the Achievements page, I can see that there are 17 achievements available (correct), but they all show as hidden, despite checking the "No" box in App Store Connect.