In what scenario will an app receive the limitExceeded PHPhotosError code? This case was added in iOS 26.1 and is not currently documented. What PhotoKit APIs can encounter this error and how should it be handled?
Problem Description
(1) I am using ARKit in an iOS app to provide AR capabilities. Specifically, I'm trying to use the ARSession's captureHighResolutionFrame(using:) method to capture a high-resolution frame along with its corresponding depth data:
open func captureHighResolutionFrame(using photoSettings: AVCapturePhotoSettings?) async throws -> ARFrame
(2) However, when I attempt to do so, the call fails at runtime with the following error, which I captured from the Xcode debugger:
[AVCapturePhotoOutput capturePhotoWithSettings:delegate:] settings.depthDataDeliveryEnabled must be NO if self.isDepthDataDeliveryEnabled is NO
Code Snippet Explanation
(1) ARConfig and ARSession Initialization
The following code configures the ARConfiguration and ARSession. A key part of this setup is setting the videoFormat to the one recommended for high-resolution frame capturing, as suggested by the documentation.
func start(imagesDirectory: URL, configuration: Configuration = Configuration()) {
    // ... basic setup ...
    let arConfig = ARWorldTrackingConfiguration()
    arConfig.planeDetection = [.horizontal, .vertical]

    // Enable various frame semantics for depth and segmentation
    if ARWorldTrackingConfiguration.supportsFrameSemantics(.smoothedSceneDepth) {
        arConfig.frameSemantics.insert(.smoothedSceneDepth)
    }
    if ARWorldTrackingConfiguration.supportsFrameSemantics(.sceneDepth) {
        arConfig.frameSemantics.insert(.sceneDepth)
    }
    if ARWorldTrackingConfiguration.supportsFrameSemantics(.personSegmentationWithDepth) {
        arConfig.frameSemantics.insert(.personSegmentationWithDepth)
    }

    // Set the recommended video format for high-resolution captures
    if let videoFormat = ARWorldTrackingConfiguration.recommendedVideoFormatForHighResolutionFrameCapturing {
        arConfig.videoFormat = videoFormat
        print("Enabled: High-Resolution Frame Capturing by selecting recommended video format.")
    }

    arSession.run(arConfig, options: [.resetTracking, .removeExistingAnchors])
    // ...
}
(2) Capturing the High-Resolution Frame
The code below is intended to manually trigger the capture of a high-resolution frame. The goal is to obtain both a high-resolution color image and its associated high-resolution depth data. To achieve this, I explicitly set the isDepthDataDeliveryEnabled property of the AVCapturePhotoSettings object to true.
func requestImageCapture() async {
    // ... guard statements ...
    print("Manual image capture requested.")

    if #available(iOS 16.0, *) { // Assuming 16.0+ for this API
        if let defaultSettings = arSession.configuration?.videoFormat.defaultPhotoSettings {
            // Create a mutable copy from the default settings, as recommended
            let photoSettings = AVCapturePhotoSettings(from: defaultSettings)
            // Explicitly enable depth data delivery for this capture request
            photoSettings.isDepthDataDeliveryEnabled = true
            do {
                let highResFrame = try await arSession.captureHighResolutionFrame(using: photoSettings)
                print("Successfully captured a high-resolution frame.")
                if let initialDepthData = highResFrame.capturedDepthData {
                    // Process depth data...
                } else {
                    print("High-resolution frame was captured, but it contains no depth data.")
                }
            } catch {
                // The exception is caught here
                print("Error capturing high-resolution frame: \(error.localizedDescription)")
            }
        }
    }
    // ...
}
Issue Confirmation & Question
(1) Through debugging, I have confirmed the following behavior: If I call captureHighResolutionFrame without providing the photoSettings parameter, or if photoSettings.isDepthDataDeliveryEnabled is set to false, the method successfully returns a high-resolution ARFrame, but its capturedDepthData is nil.
(2) The error message clearly indicates that settings.depthDataDeliveryEnabled can only be true if the underlying AVCapturePhotoOutput instance's own isDepthDataDeliveryEnabled property is also true.
(3) However, within the context of ARKit and ARSession, I cannot find any public API that would allow me to explicitly access and configure the underlying AVCapturePhotoOutput instance that ARSession manages.
(4) My question is:
Is there a way to configure the ARSession's internal AVCapturePhotoOutput to enable its isDepthDataDeliveryEnabled property? Or, is simultaneously capturing a high-resolution frame and its associated depth data simply not a supported use case in the current ARKit framework?
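In the meantime, the only partial workaround I can think of is to read depth from the frame's scene-depth buffers (which should be populated because .sceneDepth / .smoothedSceneDepth are enabled above) instead of capturedDepthData. This is LiDAR-resolution depth, not photo-resolution depth, and I have not verified that the high-resolution ARFrame carries it, so treat this purely as a sketch of what I'd try inside the existing do block:

// Sketch: capture without requesting photo-level depth (which throws), then
// fall back to ARKit's scene depth on the returned frame. sceneDepth /
// smoothedSceneDepth come from the LiDAR pipeline and are far lower resolution
// than the high-resolution color image.
let highResFrame = try await arSession.captureHighResolutionFrame()

if let sceneDepth = highResFrame.smoothedSceneDepth ?? highResFrame.sceneDepth {
    let depthMap = sceneDepth.depthMap             // CVPixelBuffer, Float32 depth in meters
    let confidenceMap = sceneDepth.confidenceMap   // CVPixelBuffer?, per-pixel confidence
    // Process / save depthMap alongside highResFrame.capturedImage ...
} else {
    print("No scene depth available on this frame.")
}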
If the app is launched from LockedCameraCapture and if the settings button is tapped, I need to launch the main app.
CameraViewController:
func settingsButtonTapped() {
    #if isLockedCameraCaptureExtension
    // App is launched from the Lock Screen
    // Launch the main app here...
    #else
    // App is launched from the Home Screen
    self.showSettings(animated: true)
    #endif
}
In this document:
https://developer.apple.com/documentation/lockedcameracapture/creating-a-camera-experience-for-the-lock-screen
Apple asks you to use:
func launchApp(with session: LockedCameraCaptureSession, info: String) {
    Task {
        do {
            let activity = NSUserActivity(activityType: NSUserActivityTypeLockedCameraCapture)
            activity.userInfo = [UserInfoKey: info]
            try await session.openApplication(for: activity)
        } catch {
            StatusManager.displayError("Unable to open app - \(error.localizedDescription)")
        }
    }
}
However, the documentation states that this should be placed within the extension code - LockedCameraCapture. If I do that, how can I call that all the way down from the main app's CameraViewController?
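One approach I'm considering (a sketch only; the closure and factory names are mine, not from Apple's sample): since the extension is the only place that has the LockedCameraCaptureSession, inject a callback into CameraViewController when the extension creates it, and invoke that callback from settingsButtonTapped:

// In the shared CameraViewController (hypothetical property):
// the extension sets this; the main app leaves it nil.
var openMainAppHandler: ((String) -> Void)?

func settingsButtonTapped() {
    #if isLockedCameraCaptureExtension
    // Running inside the LockedCameraCapture extension:
    // hand off to the main app via the injected handler.
    openMainAppHandler?("openSettings")
    #else
    // Running in the main app from the Home Screen.
    self.showSettings(animated: true)
    #endif
}

// In the extension, where the capture UI is created and the session is available
// (the exact scene/entry-point type depends on your extension setup):
func makeCameraViewController(session: LockedCameraCaptureSession) -> CameraViewController {
    let cameraVC = CameraViewController()
    cameraVC.openMainAppHandler = { info in
        // Calls the launchApp(with:info:) helper from Apple's sample above.
        launchApp(with: session, info: info)
    }
    return cameraVC
}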
On iOS 26.1, this throws on the 2020 iPad Pro (4th gen) but works fine on an M4 iPad Pro or iPhone 15 Pro:
guard let device = AVCaptureDevice.default(.builtInLiDARDepthCamera, for: .video, position: .back) else {
    throw ConfigurationError.lidarDeviceUnavailable
}
It's just the standard code from Apple's own sample, so it evidently used to work:
https://developer.apple.com/documentation/AVFoundation/capturing-depth-using-the-lidar-camera
Does it fail because Apple has silently dropped support for the older LiDAR sensor used prior to the M4 iPad Pro, or is there another reason? What about the 5th and 6th gen iPad Pro, does it still work on those?
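To help narrow it down, here is a small diagnostic I'm running on the affected iPad (my own snippet, not from the sample) that lists which depth-capable back cameras the discovery session actually reports:

import AVFoundation

// Diagnostic: list which depth-capable back cameras the system reports.
// If .builtInLiDARDepthCamera is missing from this list on the 2020 iPad Pro,
// the default(_:for:position:) lookup above will necessarily return nil.
let discovery = AVCaptureDevice.DiscoverySession(
    deviceTypes: [.builtInLiDARDepthCamera, .builtInDualWideCamera, .builtInDualCamera, .builtInWideAngleCamera],
    mediaType: .video,
    position: .back
)

for device in discovery.devices {
    let depthFormats = device.activeFormat.supportedDepthDataFormats
    print("\(device.localizedName) [\(device.deviceType.rawValue)] - depth formats on active format: \(depthFormats.count)")
}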
We are working with HDR images and found this problem: asset.mediaSubtypes.contains(.photoHDR) always returns false.
func picker(_ picker: PHPickerViewController, didFinishPicking results: [PHPickerResult]) {
    guard !results.isEmpty else { return }

    let fetchResult = PHAsset.fetchAssets(withLocalIdentifiers: results.compactMap { $0.assetIdentifier }, options: nil)
    var selectedAssets: [PHAsset] = []
    fetchResult.enumerateObjects { asset, _, _ in
        selectedAssets.append(asset)
    }

    for (index, asset) in selectedAssets.enumerated() {
        print("\(index): ")
        print(PHAssetMediaSubtype.photoHDR.rawValue)
        print(asset.mediaSubtypes)
        print("photoHDR \(PHAssetMediaSubtype.photoHDR.rawValue): \(asset.mediaSubtypes.contains(.photoHDR))")
        print("photoLive \(PHAssetMediaSubtype.photoLive.rawValue): \(asset.mediaSubtypes.contains(.photoLive))")
        print("photoPanorama \(PHAssetMediaSubtype.photoPanorama.rawValue): \(asset.mediaSubtypes.contains(.photoPanorama))")
        print("photoScreenshot \(PHAssetMediaSubtype.photoScreenshot.rawValue): \(asset.mediaSubtypes.contains(.photoScreenshot))")
        print("videoHighFrameRate \(PHAssetMediaSubtype.videoHighFrameRate.rawValue): \(asset.mediaSubtypes.contains(.videoHighFrameRate))")
        print("videoScreenRecording \(PHAssetMediaSubtype.videoScreenRecording.rawValue): \(asset.mediaSubtypes.contains(.videoScreenRecording))")
    }
}
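For what it's worth, one check we're experimenting with as a workaround (our own approach, not an Apple-documented mapping to .photoHDR): request the asset's original image data and look for an HDR gain map auxiliary image via ImageIO, assuming the assets in question are gain-map HDR captures:

import Photos
import ImageIO

// Workaround sketch: detect gain-map HDR by inspecting the image data itself,
// independent of PHAssetMediaSubtype. Assumes the asset's original resource is
// a HEIC/JPEG that embeds an HDR gain map as auxiliary data.
func assetHasHDRGainMap(_ asset: PHAsset, completion: @escaping (Bool) -> Void) {
    let options = PHImageRequestOptions()
    options.isNetworkAccessAllowed = true
    options.version = .original

    PHImageManager.default().requestImageDataAndOrientation(for: asset, options: options) { data, _, _, _ in
        guard let data,
              let source = CGImageSourceCreateWithData(data as CFData, nil) else {
            completion(false)
            return
        }
        let gainMap = CGImageSourceCopyAuxiliaryDataInfoAtIndex(
            source, 0, kCGImageAuxiliaryDataTypeHDRGainMap
        )
        completion(gainMap != nil)
    }
}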
1. Can Background Upload run normally on the release version of iOS 26.1? I tested with the release version of iOS 26.1 and found that background upload was not triggered for a long time.
2. I then tested on iOS 26.2 and found that background upload could be triggered normally, but it kept producing an error: "Error returned from daemon: error Domain=com.apple.accounts Code=7 "(null)""
Apple Watch app closes when changing photo permissions in the iPhone app.
App A is installed simultaneously on both the paired iPhone and Apple Watch.
I'm running App A on both my iPhone and Apple Watch.
When I change the photo permissions on App A installed on my iPhone, App A running on my Apple Watch automatically closes.
At first, I assumed App A on my iPhone was abnormally closing, causing App A on my Apple Watch to also close.
However, I've determined that changing the photo permissions is the cause of the app closing.
I don't think this behavior existed before watchOS/iOS 26.
Is this behavior an intentional change in watchOS/iOS 26?
If I go to the home screen while running app A on my iPhone and change its photo permissions in the Settings app, app A running on my Apple Watch automatically closes.
I'm trying to benchmark a Core Image filter chain's memory footprint and noticed a weird quirk in Instruments.
On a real device, even with a simple Core Image chain, memory balloons each time I run the filter. See the attached screenshots.
Running on iPhone 17 Pro:
Running on simulator (M2 MacBook Pro):
As you can see, there's a huge build-up of 4 MB "VM: IOSurface" allocations on the real device, but the simulator seems to clean them up correctly.
Here's my basic code:
func processImage() {
    guard let inputImage = ContentViewModel.loadImageFromBundle(name: "kitty.HEIC") else {
        print("Failed to load sample_image from bundle")
        return
    }

    var outputImage = inputImage
    outputImage = outputImage.applyingFilter("CIBloom", parameters: [
        kCIInputRadiusKey: 20,
        kCIInputIntensityKey: 0.8
    ])

    DispatchQueue.global(qos: .userInitiated).async {
        let data = self.context.jpegRepresentation(of: outputImage, colorSpace: CGColorSpace(name: CGColorSpace.sRGB)!)
        if let data = data, let uiImage = UIImage(data: data) {
            DispatchQueue.main.async {
                self.displayImage = Image(uiImage: uiImage)
            }
        }
    }
}
Why is this happening? It seems like a bug to me, or perhaps I need to release an object somewhere. At the very least, it makes it challenging to measure memory usage.
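One thing I still plan to try (my own guess, not a confirmed fix): wrapping the render in an explicit autoreleasepool, in case the IOSurface-backed buffers produced by jpegRepresentation are just waiting for the background queue's autorelease pool to drain. A sketch of the change:

DispatchQueue.global(qos: .userInitiated).async {
    // Drain autoreleased render buffers (IOSurfaces) as soon as the JPEG is made,
    // instead of waiting for the background queue's pool to drain on its own.
    let data: Data? = autoreleasepool {
        self.context.jpegRepresentation(of: outputImage, colorSpace: CGColorSpace(name: CGColorSpace.sRGB)!)
    }
    if let data, let uiImage = UIImage(data: data) {
        DispatchQueue.main.async {
            self.displayImage = Image(uiImage: uiImage)
        }
    }
}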
Any help is greatly appreciated.
Alex
Device: iPhone 17 Pro
iOS Version: iOS 26.1
Camera: Ultra-wide (0.5x) using AVCaptureSession
Our camera app freezes on iPhone 17 when switching frame rates (30fps ↔ 60fps). This works fine on iPhone 16 Pro and earlier.
What We've Observed:
Freeze happens on frame rate change - particularly when stabilization is enabled
Thread.sleep is used - to allow camera hardware to settle before re-enabling stabilization
Works on older iPhones - only iPhone 17 exhibits this behavior
Console shows these errors before the freeze:
<<<< FigXPCUtilities >>>> signalled err=18446744073709534335 <<<< FigCaptureSourceRemote >>>> err=-17281
Is Thread.sleep on the main thread causing the freeze? Should all camera configuration be on a background queue?
Is there something specific about iPhone 17 ultra-wide camera that requires different handling?
Should we use session.beginConfiguration() / session.commitConfiguration() instead of direct device configuration?
Is calling setFrameRate from a property's didSet (which runs synchronously) problematic?
Are the FigCaptureSourceRemote errors (-17281) indicative of the problem, and what do they mean?
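For reference, this is roughly the pattern we're moving toward (a sketch of what we understand to be the recommended approach: on a dedicated session queue, no Thread.sleep; the queue and property names are ours):

// Sketch: reconfigure frame rate on a dedicated session queue, inside a single
// lock/configuration pass, instead of sleeping on the main thread.
private let sessionQueue = DispatchQueue(label: "camera.session.queue")

func setFrameRate(_ fps: Int32) {
    sessionQueue.async {
        guard let device = self.videoDeviceInput?.device else { return }

        self.session.beginConfiguration()
        defer { self.session.commitConfiguration() }

        do {
            try device.lockForConfiguration()
            defer { device.unlockForConfiguration() }

            // Make sure the active format actually supports the requested rate.
            let supportsRate = device.activeFormat.videoSupportedFrameRateRanges.contains {
                $0.minFrameRate <= Double(fps) && Double(fps) <= $0.maxFrameRate
            }
            guard supportsRate else {
                print("Active format does not support \(fps) fps")
                return
            }

            let duration = CMTime(value: 1, timescale: fps)
            device.activeVideoMinFrameDuration = duration
            device.activeVideoMaxFrameDuration = duration
        } catch {
            print("Failed to lock device for configuration: \(error)")
        }
    }
}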
Hey,
There seems to be an inconsistency when capturing a photo using QualityPrioritization.quality on the iPhone 17 Pro main wide lens. If you zoom above 2x, the output image always has a -2.0 EV bias in the metadata and looks underexposed. This does not happen at zoom levels below 2x, or if you set the quality prioritization to .balanced.
See below:
with .quality
with .balanced
This does not happen on the other lenses.
I'm using a simple set up and it is consistent across JPEG and ProRAW capture. I have a demo project if that is useful.
Thanks,
Alex
PHPhotoLibrary.authorizationStatus(for: .readWrite) == .authorized
Info.plist Privacy - Photo Library Usage Description is set.
I check authorization before attempting to get the photoPickerItem.itemIdentifier, but the return value from itemIdentifier is nil every time. It seems I'm missing some permission, but I'm unsure why the system is still keeping _shouldExposeItemIdentifier set to false.
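In my experience itemIdentifier is only populated when the picker is created against the shared photo library. If the SwiftUI PhotosPicker is what's in use here, this is a sketch of that configuration (the view and state names are placeholders):

import SwiftUI
import PhotosUI

struct PickerExample: View {
    @State private var selection: [PhotosPickerItem] = []

    var body: some View {
        // Passing photoLibrary: .shared() is what allows the system to expose
        // itemIdentifier on the returned PhotosPickerItem values; without it,
        // itemIdentifier stays nil.
        PhotosPicker(
            selection: $selection,
            matching: .images,
            photoLibrary: .shared()
        ) {
            Text("Pick photos")
        }
        .onChange(of: selection) { _, items in
            for item in items {
                print("itemIdentifier: \(item.itemIdentifier ?? "nil")")
            }
        }
    }
}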
I am new to Swift and iOS development, and I have a question about video capture performance.
Is it possible to capture video at a resolution of 4032×3024 while simultaneously running a vision/ML model on the video stream (e.g., using Vision or CoreML)?
I want to know:
whether iOS devices support capturing video at that resolution,
whether the frame rate drops significantly at that scale,
and whether it is practical to run a Vision/ML model in real-time while recording at such a high resolution.
If anyone has experience with high-resolution AVCaptureSession setups or combining them with real-time ML processing, I would really appreciate guidance or sample code.
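A rough sketch of how I'd expect to probe this (assumptions: an AVCaptureSession-based setup; whether a given device actually exposes a 4032×3024 video format, and at what frame rate, has to be checked at runtime):

import AVFoundation

// Sketch: attach the camera and a video data output (which a Vision/Core ML
// pipeline could consume), then look for a video format matching 4032x3024 and
// print its maximum frame rate. Availability is entirely device-dependent.
func configureHighResCapture(session: AVCaptureSession) throws {
    guard let device = AVCaptureDevice.default(.builtInWideAngleCamera, for: .video, position: .back) else {
        return
    }

    session.beginConfiguration()

    let input = try AVCaptureDeviceInput(device: device)
    if session.canAddInput(input) { session.addInput(input) }

    // Frames from this output can be fed to VNImageRequestHandler / a Core ML model.
    let videoOutput = AVCaptureVideoDataOutput()
    videoOutput.alwaysDiscardsLateVideoFrames = true
    if session.canAddOutput(videoOutput) { session.addOutput(videoOutput) }

    session.commitConfiguration()

    // Find a format whose dimensions match the target resolution.
    let target = (width: Int32(4032), height: Int32(3024))
    if let format = device.formats.first(where: { format in
        let dims = CMVideoFormatDescriptionGetDimensions(format.formatDescription)
        return dims.width == target.width && dims.height == target.height
    }) {
        let maxFPS = format.videoSupportedFrameRateRanges.map(\.maxFrameRate).max() ?? 0
        print("Found 4032x3024 format, max \(maxFPS) fps")
        try device.lockForConfiguration()
        device.activeFormat = format // switches the session to input-priority mode
        device.unlockForConfiguration()
    } else {
        print("No 4032x3024 video format on this device")
    }
}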
We are planning to develop an app that connects to a UVC camera to capture and display video via AVFoundation. Could you please advise on which iPhone models support UVC cameras?
Hello, We are trying to use the new Background Upload Extension to improve uploads of assets (Photos, Live Photos, Videos) in the background in our application.
1-The assets have finished uploading, but I'm unable to retrieve successful records using PHAssetResourceUploadJob.fetchJobs(action: .acknowledge, options: nil). When will the successful records be returned?
2-How do we retrieve the system's pending tasks? We want to cancel tasks handed over to the system when returning to the main app to avoid duplicate resource uploads.
3-When we set UploadJobExtensionEnabled = true, will tasks handed over to the system still execute after returning to the main app? Do we need to set UploadJobExtensionEnabled = false upon returning to the main app? If we set UploadJobExtensionEnabled = false, will previously submitted upload tasks be cleared?
I am developing an iOS camera app that can record video directly to external storage connected to an iPhone.
To detect whether an external USB storage device is connected and to obtain its URL, I am considering using AVExternalStorageDeviceDiscoverySession.
However, when checking support using AVExternalStorageDeviceDiscoverySession.isSupported, I observe that it returns true only on Pro model iPhones, and false on non-Pro models in my environment.
I have reviewed Apple’s official documentation, but I could not find any clear description of the supported devices or requirements (for example, whether this API is limited to Pro models or requires specific hardware capabilities).
I would appreciate any information regarding the following points:
●The actual requirements for AVExternalStorageDeviceDiscoverySession to be supported
Device limitations (Pro vs non-Pro models)
Hardware requirements (USB controller, external recording capability, etc.)
iOS version dependencies
●Whether support for non-Pro models is planned in the future
Tested environments
iPhone 16 Pro (iOS 18.7.1) → isSupported == true
iPhone 16e (iOS 26.2) → isSupported == false
iPhone 17 (iOS 26.2) → isSupported == false
iPhone Air (iOS 26.2) → isSupported == false
If anyone has observed similar behavior or has official information from Apple regarding this API, I would greatly appreciate your insights.
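For anyone comparing devices, this is the minimal check we're running on each model (nothing beyond the public API, as we understand it; the helper name is ours):

import AVFoundation

// Minimal diagnostic for comparing devices: report whether external-storage
// discovery is supported and, if so, what the shared session currently sees.
func logExternalStorageSupport() {
    guard AVExternalStorageDeviceDiscoverySession.isSupported else {
        print("AVExternalStorageDeviceDiscoverySession is not supported on this device")
        return
    }
    guard let session = AVExternalStorageDeviceDiscoverySession.shared else {
        print("Supported, but no shared discovery session is available")
        return
    }
    print("Connected external storage devices: \(session.externalStorageDevices.count)")
}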
I’m writing to report a serious usability regression in the iOS 26 Photos app. Folders can still be created and albums can still be assigned to them, but folders can no longer be opened to view the albums they contain. A container that cannot be opened is not a container, and this breaks a fundamental information architecture model that has existed in Photos for well over a decade.
This change disproportionately harms users who maintain large, intentional photo libraries—travel archives, projects, professional work, or long-term personal documentation—where hierarchy and ordering are essential. Search and automated surfacing are not substitutes for deliberate structure. Removing the ability to browse folder → album hierarchy on iOS strips users of control while still exposing the UI for folder creation, which is internally inconsistent.
If this behavior is intentional, it should be clearly documented and the folder UI removed to avoid misleading users. If it is not intentional, it needs urgent correction. At minimum, iOS should retain parity with macOS Photos for basic navigation of folders and albums. This is not a niche request; it is a regression in core functionality.
Hi everyone,
I'm developing a camera application that requires precise, predictable control over the focus system. I'm encountering unexpected behavior with face-driven autofocus in continuous autofocus mode.
Issue:
When using AVCaptureDevice.FocusMode.continuousAutoFocus, the system continues to prioritize faces for focus even after attempting to disable face-driven autofocus with:
device.automaticallyAdjustsFaceDrivenAutoFocusEnabled = false
device.isFaceDrivenAutoFocusEnabled = false
Observations:
The behavior is inconsistent across different scenes.
In well-lit/properly exposed scenes: focus persistently locks onto faces, ignoring my configuration.
In underexposed scenes: the intended focus behavior is more consistently respected.
Environment
Device: iPhone 15 Pro
iOS: iOS 18.0
Framework: AVFoundation
App type: Custom camera app using AVCaptureSession + AVCaptureVideoPreviewLayer
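For completeness, this is the full configuration sequence I'm using, with the device-locking boilerplate included. As I understand it, automaticallyAdjustsFaceDrivenAutoFocusEnabled must be set to false before isFaceDrivenAutoFocusEnabled can be changed, and both inside lockForConfiguration():

// Disable face-driven autofocus while keeping continuous AF.
// Both face-driven properties must be changed inside lockForConfiguration(),
// and automatic adjustment must be turned off before the manual flag is set.
do {
    try device.lockForConfiguration()
    defer { device.unlockForConfiguration() }

    if device.isFocusModeSupported(.continuousAutoFocus) {
        device.focusMode = .continuousAutoFocus
    }

    device.automaticallyAdjustsFaceDrivenAutoFocusEnabled = false
    device.isFaceDrivenAutoFocusEnabled = false

    // The same pattern exists for exposure, in case faces also pull exposure:
    device.automaticallyAdjustsFaceDrivenAutoExposureEnabled = false
    device.isFaceDrivenAutoExposureEnabled = false
} catch {
    print("Failed to lock device for configuration: \(error)")
}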
I’m seeing an intermittent but frequent issue where the camera preview layer briefly flashes empty after certain interruptions, even though the capture session reports itself as running and no errors are emitted.
This happens most often after:
Locking and unlocking the device
Switching cameras (back ↔ front)
The issue is not 100% reproducible, but occurs often enough to be noticeable in normal usage.
What happens
The preview layer briefly flashes as empty (sometimes just a “micro-frame”)
Duration: typically ~0.5–2 seconds before frames resume
session.isRunning == true throughout
No crash, no runtime error, no interruption end failure
Focus/exposure restore correctly once frames resume
Visually it looks like the preview layer loses frames temporarily, even though the session appears healthy.
Repro
Intermittent but frequent after:
Lock → unlock device
Switching camera (front/back)
Timing-dependent and non-deterministic
Happens multiple times per session, but not every time
Key observation
AVCaptureSession.isRunning == true does not guarantee that frames are actually flowing.
To verify this, I added an AVCaptureVideoDataOutput temporarily:
During the blank period, no sample buffers are delivered
Frames resume after ~1–2s without any explicit restart
Session state remains “running” the entire time
What I’ve tried (did NOT fix it)
Adding delays before/after startRunning() (0.1–0.5s)
Calling startRunning() on different queues
Restarting the session in AVCaptureSessionInterruptionEnded
Verifying session.connections (all show isActive == true)
Rebuilding inputs/outputs during interruption recovery
Ensuring startRunning() is never called between beginConfiguration() / commitConfiguration()
(Hit the expected runtime warning when attempted)
None of the above removed the brief blank preview.
Workaround (works visually but expensive)
Keeping the AVCaptureVideoDataOutput path from the check above running continuously bridges the gap, but:
Energy impact jumps from Low → High in Xcode Energy Gauge
AVCaptureVideoDataOutput processes 30–60 FPS continuously
The gap only lasts ~1–2s, but toggling the delegate on/off cleanly is difficult
Overall CPU and energy cost is not acceptable for production
Additional notes
CPU usage is already relatively high even without the workaround (this app is camera-heavy by nature)
With the workaround enabled, energy impact becomes noticeably worse
The issue feels like a timing/state desync between session state and actual frame delivery, not a UI issue
Questions
Is this a known behavior where AVCaptureSession.isRunning == true but frames are temporarily unavailable after interruptions?
Is there a recommended way to detect actual frame flow resumption (not just session state)?
Should the AVCaptureVideoPreviewLayer.connection (isActive / isEnabled) be explicitly checked or reset after interruptions?
Is there a lightweight, energy-efficient way to bridge this short “no frames” gap without using AVCaptureVideoDataOutput?
Is rebuilding the entire session the only reliable solution here, or is there a better pattern Apple recommends?
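In case it helps frame question 2, this is the kind of lightweight probe I've been experimenting with (my own sketch; it is attached only after an interruption ends, for example from the AVCaptureSession.interruptionEndedNotification observer, and removes itself on the first frame so the data output never runs permanently):

import AVFoundation

// Sketch: attach a throwaway AVCaptureVideoDataOutput only after an interruption
// ends, and detach it as soon as the first sample buffer arrives, so the
// "frames are flowing again" moment can be detected without a permanent output.
final class FrameFlowProbe: NSObject, AVCaptureVideoDataOutputSampleBufferDelegate {
    private let output = AVCaptureVideoDataOutput()
    private let queue = DispatchQueue(label: "frame.flow.probe")
    private weak var session: AVCaptureSession?
    private var fired = false
    var onFirstFrame: (() -> Void)?

    func start(in session: AVCaptureSession) {
        self.session = session
        output.alwaysDiscardsLateVideoFrames = true
        output.setSampleBufferDelegate(self, queue: queue)
        session.beginConfiguration()
        if session.canAddOutput(output) { session.addOutput(output) }
        session.commitConfiguration()
    }

    func captureOutput(_ output: AVCaptureOutput,
                       didOutput sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        // First frame after the interruption: report it once and remove the probe.
        guard !fired else { return }
        fired = true
        self.output.setSampleBufferDelegate(nil, queue: nil)
        DispatchQueue.main.async { [weak self] in
            guard let self, let session = self.session else { return }
            session.beginConfiguration()
            session.removeOutput(self.output)
            session.commitConfiguration()
            self.onFirstFrame?()
        }
    }
}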