I am conducting a forensic examination of a video, and it would be helpful if anyone knew how to decode the LiveTrackInfo in the metadata of a QuickTake .MOV recorded on iOS 17.5.1. There are 27 different fields, and I am not sure what each one represents. Any help would be appreciated! Thank you!
Image I/O
Read and write most image file formats, manage color, and access image metadata using Image I/O.
Posts under Image I/O tag: 33 posts
Dear colleagues, when will you add the ability to manually adjust the display's color temperature? Everyone is familiar with the color rendering issue on the 17 series. Many complain about the display's yellowish tint. I'm one of those people, with a G9N panel, but I can't get whites right; there's a persistent yellow tint. TrueTone only solves this problem under cool, white lighting conditions. So, the display might work as intended, but how can I make it work consistently? If there were a way to manually adjust TrueTone, many users wouldn't be so upset when buying a new device, fearing the display would be yellow. I'd like users to be able to choose their preferred color, warm or cool, and have a scale to adjust it! This would solve the yellowish tint issue on 17 series displays! Thank you.
Hello Apple Developer Forums,
I’m preparing to submit an app update that includes an in-app subscription. As part of the submission, I need to provide screenshots showing where the user initiates and completes the subscription purchase flow.
The issue is that App Store Connect keeps rejecting my screenshot upload with an “incorrect size” (or size invalid) error. I have already tried exporting the screenshot in all sizes and resolutions described in Apple’s documentation, but none of them are being accepted so far.
Could you please advise:
What exact pixel dimensions / format requirements App Store Connect currently enforces for these screenshots (including file type and color profile, if relevant)?
Whether there are any known issues or common causes for this error (e.g., metadata, alpha channel, scaling, or export settings)?
Any recommended workflow/tools to generate a compliant screenshot that reliably uploads?
Thank you in advance for your help.
Hi everyone, does anybody have any resources I could check out regarding the 48→12MP binning behavior on supported sensors? I know the 48MP sensor on iPhone can automatically bin pixels for better low-light performance, but I'm not sure how to reliably make this happen in practice.
On iPhone 14 Pro+ with a 48MP sensor, I want the best of both worlds for ProRAW:
∙ Bright light: 48MP full resolution
∙ Low light: 12MP pixel-binned for better noise
photoOutput.maxPhotoDimensions = CMVideoDimensions(width: 8064, height: 6048)
let settings = AVCapturePhotoSettings(rawPixelFormatType: proRawFormat, processedFormat: [...])
settings.photoQualityPrioritization = .quality
// NOT setting settings.maxPhotoDimensions — always get 12MP
When I omit maxPhotoDimensions, iOS always returns 12MP regardless of lighting. When I set it to 48MP, I always get 48MP.
Is there an API to let iOS automatically choose the optimal resolution based on conditions, or should I detect low light myself (via device.iso / exposureDuration) and set maxPhotoDimensions accordingly?
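For reference, a rough sketch of the manual fallback described above, assuming device is the active AVCaptureDevice and photoOutput is already configured for ProRAW; the ISO/exposure thresholds are arbitrary placeholders, and both dimension values have to appear in the active format's supportedMaxPhotoDimensions:

import AVFoundation

func makeSettings(device: AVCaptureDevice, proRawFormat: OSType) -> AVCapturePhotoSettings {
    let settings = AVCapturePhotoSettings(rawPixelFormatType: proRawFormat)
    settings.photoQualityPrioritization = .quality

    // Crude low-light heuristic: high ISO or long exposure (thresholds are placeholders).
    let isLowLight = device.iso > 800 || CMTimeGetSeconds(device.exposureDuration) > 1.0 / 30.0

    // 12 MP (binned) in low light, 48 MP otherwise; both must be listed in
    // device.activeFormat.supportedMaxPhotoDimensions.
    settings.maxPhotoDimensions = isLowLight
        ? CMVideoDimensions(width: 4032, height: 3024)
        : CMVideoDimensions(width: 8064, height: 6048)
    return settings
}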
Any help or direction would be much appreciated!
Hi, I’m trying to integrate my iOS app with Shortcuts.
My goal is:
In the Shortcuts app → Create a shortcut → Select an image → Share the image directly to my app for analysis.
However, when I try to add the “Share with App” / “Open in App” / “Send to App” action in Shortcuts:
My app does NOT appear in the list of available apps.
I want my app to be selectable so that Shortcuts can send an image (UIImage / file) to my app.
What I have tried
My app supports receiving images using UIActivityViewController and Share Extension.
I created an App Intents extension (AppIntent + @Parameter(file)...) but the app still does not appear in Shortcuts “Share with App”.
I also checked the Info.plist but didn’t find any permission related to Shortcuts.
The app is installed on the device and works normally.
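For reference, here is a minimal sketch of the App Intents approach mentioned above (an intent compiled into the main app target); the type name, title, and analysis step are placeholders, not a confirmed fix for the visibility problem:

import AppIntents

struct AnalyzeImageIntent: AppIntent {
    static var title: LocalizedStringResource = "Analyze Image"

    // File parameter that Shortcuts can fill with a picked image.
    @Parameter(title: "Image")
    var image: IntentFile

    func perform() async throws -> some IntentResult {
        // image.data holds the bytes of the file Shortcuts passed in;
        // hand them to the app's analysis code here (placeholder).
        let imageData = image.data
        _ = imageData
        return .result()
    }
}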
My question
What permission, Info.plist entry, or capability is required so that my app becomes visible in the Shortcuts app as a target for image sharing?
More specifically:
Which extension type should be used for receiving images from Shortcuts?
App Intents Extension?
Share Extension?
Intent Extension?
Do I need a specific NSExtensionPointIdentifier for Shortcuts integration?
Do I need to declare a custom Uniform Type Identifier (UTI) or add supported content types so Shortcuts knows my app can handle images?
Are there any required entitlements / capabilities to make the app appear inside the “Share with App” action?
Goal Summary
I simply want:
Shortcuts → Pick Image → Send to My App → App receives the image and processes it.
But currently my app cannot be selected in Shortcuts.
Thanks in advance for any guidance!
Topic:
App & System Services
SubTopic:
Automation & Scripting
Tags:
Image I/O
Extensions
App Intents
Since iOS 18.3, icons are no longer generated correctly with QLThumbnailGenerator.
No error is returned either.
But this error message now appears in the console:
Error returned from iconservicesagent image request: <ISTypeIcon: 0x3010f91a0>,Type: com.adobe.pdf - <ISImageDescriptor: 0x302f188c0> - (36.00, 36.00)@3x v:1 l:5 a:0:0:0:0 t:() b:0 s:2 ps:0 digest: B19540FD-0449-3E89-AC50-38F92F9760FE error: Error Domain=NSOSStatusErrorDomain Code=-609 "Client is disallowed from making such an icon request" UserInfo={NSLocalizedDescription=Client is disallowed from making such an icon request}
Does anyone know this error? Is there a workaround?
Are there new permissions to consider?
Here is the code that generates the icons:
let request = QLThumbnailGenerator.Request(fileAt: url, size: size, scale: scale, representationTypes: self.thumbnailType)
request.iconMode = true
let generator = QLThumbnailGenerator.shared
generator.generateRepresentations(for: request) { [weak self] thumbnail, _, error in
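    // Observed behavior since iOS 18.3: the thumbnail comes back without the expected
    // icon rendering while error stays nil; the -609 message only appears in the console.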
}
“iOS 26 + BGContinuedProcessingTask: Why does a CPU/ML-intensive job run 4-5× slower in background?”
Hello All,
I’m a mobile-app developer working with iOS 26+ and I’m using BGContinuedProcessingTask to perform background work. My app’s workflow includes the following business logic:
Loading images via PHImageRequest.
Using a CLIP model to extract image embeddings.
Using an .mlmodel-based model to further process those embeddings.
For both model inferences I set computeUnits = .cpuAndNeuralEngine.
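For context, that configuration is roughly the following (MyEmbeddingModel is a placeholder for the generated class of the compiled .mlmodel):

import CoreML

let configuration = MLModelConfiguration()
configuration.computeUnits = .cpuAndNeuralEngine
let model = try? MyEmbeddingModel(configuration: configuration)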
When the app is moved to the background, I observe that the same workload (all three stages) becomes on average 4-5× slower than when the app is in the foreground.
In an attempt to diagnose the slowdown, I tried to profile with Xcode Instruments, but since a debugger was attached, the performance in background appeared nearly identical to foreground. Even when I detached the debugger, the measured system resource metrics (process CPU usage, system CPU usage, memory, QoS class, thermal state) showed no meaningful difference.
Below are some of the metrics I captured:
Process CPU: 177% (foreground) → 153% (background), roughly a 24-point drop
Still >1.5 cores of work.
System CPU: 56.1% → 38.4%, roughly an 18-point drop
Process Memory: 244.8 MB → 218.1 MB
QoS Class: userInitiated in both cases
Thermal State: nominal in both cases
Given these results, I’m finding it hard to pinpoint why the overall latency is so much worse when the app is backgrounded, even though the obvious metrics show little variation.
I suspect the cause may involve P-core vs E-core scheduling, or internal hardware throttling/limit of Neural Engine usage, but I cannot find clear documentation or logging to confirm this.
My question is:
Does anyone know why a CPU (and Neural Engine)-intensive job like this would slow down so dramatically when using BGContinuedProcessingTask in the background on iOS 26+, despite apparent similar resource-usage metrics?
Are there internal iOS scheduling/hardware-allocation behaviors (e.g., falling back to lower-performing cores when backgrounded) that might explain this?
Any pointers to Apple technical notes, system logs, or instrumentation I might use to detect which cores or compute units are being used would be greatly appreciated.
Thank you for your time and any guidance you can provide.
Best regards,
I want to create a GIF file and then load it into a UIImage.
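In case it helps frame the question, a minimal sketch of one common approach using Image I/O; the function name, frame delay, and output URL are placeholders:

import UIKit
import ImageIO
import UniformTypeIdentifiers

func writeGIF(from images: [UIImage], to url: URL, frameDelay: Double = 0.1) -> Bool {
    guard let destination = CGImageDestinationCreateWithURL(url as CFURL,
                                                            UTType.gif.identifier as CFString,
                                                            images.count, nil) else { return false }
    // Loop forever (0) and show each frame for frameDelay seconds.
    let gifProperties = [kCGImagePropertyGIFDictionary as String:
                            [kCGImagePropertyGIFLoopCount as String: 0]] as CFDictionary
    let frameProperties = [kCGImagePropertyGIFDictionary as String:
                            [kCGImagePropertyGIFDelayTime as String: frameDelay]] as CFDictionary
    CGImageDestinationSetProperties(destination, gifProperties)
    for image in images {
        if let cgImage = image.cgImage {
            CGImageDestinationAddImage(destination, cgImage, frameProperties)
        }
    }
    return CGImageDestinationFinalize(destination)
}

Note that UIImage(contentsOfFile:) will only show the first frame of the resulting GIF; animated playback usually needs something like a web view or a custom animated-image view.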
During normal gameplay, calling the assetBundle.Unload API very frequently causes the game's rendering to freeze while the background music keeps playing normally. This problem only occurs on iPhone 16 and iPhone 17 devices; older models have no issue at all. How can this be resolved?
Hi everyone,
I’m working on a custom camera implementation in iOS using native code. My goal is to capture unprocessed, realistic images directly from the camera — without any filters or post-image processing applied by the system.
I’ve implemented RAW image capture using the native camera APIs (AVFoundation) and successfully received .dng files. However, even the RAW outputs don’t look like the real environment — the colors, tone, and exposure still seem processed or corrected in some way.
I’ve tried various configurations such as photoSettings.rawPhotoPixelFormatType, experimenting with AVCaptureDevice and AVCapturePhotoOutput settings, and reviewing ProRAW and standard RAW behavior, but I’m still not getting truly unprocessed results that reflect the actual sensor data.
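For comparison, a hedged sketch of requesting a Bayer (sensor) RAW format rather than ProRAW, which is about as close to unprocessed sensor data as the API gets; photoOutput and delegate are assumed to already exist, and the DNG will still be demosaiced and tone-mapped by whatever renders it later:

import AVFoundation

func captureBayerRAW(photoOutput: AVCapturePhotoOutput, delegate: AVCapturePhotoCaptureDelegate) {
    // Prefer a Bayer RAW pixel format over Apple ProRAW to minimize in-pipeline processing.
    guard let rawFormat = photoOutput.availableRawPhotoPixelFormatTypes.first(where: {
        AVCapturePhotoOutput.isBayerRAWPixelFormat($0)
    }) else {
        return // RAW is not supported by the current device/format
    }
    let settings = AVCapturePhotoSettings(rawPixelFormatType: rawFormat)
    // .speed avoids the heavier multi-frame fusion used for the processed companion image.
    settings.photoQualityPrioritization = .speed
    photoOutput.capturePhoto(with: settings, delegate: delegate)
}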
Has anyone experienced similar results when capturing RAW images on iOS, or found a way to bypass Apple’s image signal processing (ISP) pipeline for more realistic captures?
Any insights or references from Apple’s camera framework behavior would be greatly appreciated.
Thank you!
I'm working on an AppIcon selector and would like to do something like UIImage(named: "AppIcon-Alternate") to present the icon for the user to choose using the new IconComposer icons.
I've done a fair bit of research on this and it looks like this used to be possible (prior to .icon) with workarounds that were later 'fixed' / removed (appending 60x60 to the icon name).
The only 'solution' seems to be bundling the exported images into the app itself but this seems like a terrible idea as it massively bloats the app. Assuming we export from the new IconComposer tool and want to include dark mode that's roughly 3MB per icon which is absolutely shocking bloat and so a terrible solution.
Looking inside the app, the Assets.car actually contains PNG files for these alternate icons. These appear in the JSON as "MultiSized Image" assets. Interestingly, UIImage(named:) does attempt to load them but fails to resolve a kCSIElementSignature. Also, the OS alert shown when switching alternate icons displays a preview of the icon, so this must be possible privately, and using Asset Catalog Tinkerer I'm able to see these PNGs.
This feels like broken API; I'd guess the new icon format is not correctly generating the entry in the Assets.car to link the generated PNGs for use with the UIImage(named:) API.
Does anyone have pointers for this? This feels like a developer API afterthought or bug but is it intentional?
Edit: I've submitted feedback for this FB20341182.
Some users reported that their images are not loading correctly in our app. After a lot of debugging we identified the following:
This only happens when the app is built for Mac Catalyst. Not on iOS, iPadOS, or “real” macOS (AppKit).
The images in question have unusual color spaces. We observed the issue for uRGB and eciRGB v2.
Those images are rendered correctly in Photos and Preview on all platforms.
When displaying the image inside of a UIImageView or in a SwiftUI Image, they render correctly.
The issue only occurs when loading the image via Core Image.
When comparing the different Core Image render graphs between AppKit (working) and Catalyst (faulty) builds, they look identical—except for the result.
Mac (AppKit): [render graph screenshot]
Catalyst: [render graph screenshot]
Something seems to be off when Core Image tries to load an image with a foreign color space in Catalyst.
We identified a workaround: By using a CGImageDestination to transcode the image using the kCGImageDestinationOptimizeColorForSharing option, Image I/O will convert the image to sRGB (or similar) and Core Image is able to load the image correctly. However, one potentially loses fidelity this way.
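For anyone hitting the same issue, a sketch of that workaround; keeping the original container type and doing the round trip in memory are just one way to wire it up:

import ImageIO
import UniformTypeIdentifiers
import CoreImage

func ciImageForCatalyst(from sourceData: Data) -> CIImage? {
    guard let source = CGImageSourceCreateWithData(sourceData as CFData, nil) else { return nil }
    let type = CGImageSourceGetType(source) ?? (UTType.jpeg.identifier as CFString)
    let converted = NSMutableData()
    guard let destination = CGImageDestinationCreateWithData(converted as CFMutableData, type, 1, nil)
    else { return nil }
    // Ask Image I/O to convert to a widely shareable color space (sRGB or similar).
    let options = [kCGImageDestinationOptimizeColorForSharing as String: true] as CFDictionary
    CGImageDestinationAddImageFromSource(destination, source, 0, options)
    guard CGImageDestinationFinalize(destination) else { return nil }
    return CIImage(data: converted as Data)
}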
Or might there be a better workaround?
Topic:
Media Technologies
SubTopic:
Photos & Camera
Tags:
Image I/O
Photos and Imaging
Core Image
Core Graphics
My app uses VStack and HStack instead of the normal table format. When I try to print, everything works perfectly, but it will not print the cell outlines.
What is the correct line of code, or what terminology should I be searching for?
Thanks, Hal
I have a small .mov I created using Screenshot and I want to use it as an app preview. I have managed to resize it to the required 1920x1080, add a sound track using ffmpeg (installed via Homebrew), drop it into an iMovie App Preview project, share it as a file, and drag that file to App Store Connect/Apps/myApp/"App Previews and Screenshots", only to have it rejected for "frame rate too high" (30 fps required). There appears to be no way to specify the frame rate in Screenshot, nor in iMovie during "share". Aside from using a third-party app such as HandBrake to edit the file, what can be done?
Maybe more importantly, why is 30 fps required when it isn't a standard output of Screenshot or an iMovie App Preview project?
By the way, iMovie's App Preview Help shows submitting non-1920x1080 files to App Store Connect.
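Since ffmpeg is already part of the workflow, its -r 30 output option is one route. Another route that stays inside Apple's frameworks is sketched below: re-export the .mov through a video composition whose frameDuration caps the output at 30 fps (paths, preset, and error handling are placeholders):

import AVFoundation

func exportAt30fps(input: URL, output: URL, completion: @escaping (Error?) -> Void) {
    let asset = AVURLAsset(url: input)
    let composition = AVMutableVideoComposition(propertiesOf: asset)
    composition.frameDuration = CMTime(value: 1, timescale: 30) // 30 fps
    guard let export = AVAssetExportSession(asset: asset,
                                            presetName: AVAssetExportPresetHighestQuality) else {
        completion(NSError(domain: "export", code: -1))
        return
    }
    export.videoComposition = composition
    export.outputURL = output
    export.outputFileType = .mov
    export.exportAsynchronously { completion(export.error) }
}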
Topic:
App Store Distribution & Marketing
SubTopic:
App Store Connect
Tags:
Image I/O
Media Player
App Store Connect
Currently, I’m working on developing a small macOS utility tool for my photography. In my camera, I have a digital zoom feature. I prefer using this feature when I shoot both JPEG and DNG files. While the JPEG is already cropped to the desired format, the DNG file contains metadata (DefaultUserCrop: 0.22, 0.22, 0.78, 0.78). For instance, when I open that DNG file in Lightroom, it pre-crops the image non-destructively. However, I prefer using Pixelmator Pro for editing. Unfortunately, Pixelmator Pro doesn’t have this feature. So, I thought I could create an app that allows me to pre-crop the image for editing in Pixelmator Pro afterward.
Does anyone have a better idea, or some hints on how I could solve this?
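As an exploratory sketch of the pre-crop idea: whether Image I/O surfaces the DNG DefaultUserCrop tag, and under which key, is an assumption to verify by dumping the properties dictionary; the crop values below are hard-coded placeholders taken from the example above, and the math just applies a normalized (top, left, bottom, right) rectangle to a CIImage:

import ImageIO
import CoreImage

func preCroppedImage(from dngURL: URL) -> CIImage? {
    guard let source = CGImageSourceCreateWithURL(dngURL as CFURL, nil),
          let properties = CGImageSourceCopyPropertiesAtIndex(source, 0, nil) as? [String: Any]
    else { return nil }
    // Inspect what the DNG dictionary actually contains for your files.
    print(properties[kCGImagePropertyDNGDictionary as String] ?? "no DNG dictionary")

    // Placeholder normalized crop (top, left, bottom, right) from the metadata example.
    let crop: [CGFloat] = [0.22, 0.22, 0.78, 0.78]

    guard let image = CIImage(contentsOf: dngURL) else { return nil }
    let extent = image.extent
    let rect = CGRect(x: extent.minX + crop[1] * extent.width,
                      y: extent.minY + (1 - crop[2]) * extent.height, // CIImage origin is bottom-left
                      width: (crop[3] - crop[1]) * extent.width,
                      height: (crop[2] - crop[0]) * extent.height)
    return image.cropped(to: rect)
}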
I'm working with images selected from the iOS Photos app using PHPickerViewController.
Some images appear as HEIF in the Photos info panel — which I understand are stored in the HEIC format, i.e., HEIF containers with HEVC-compressed images, commonly used on iOS when "High Efficiency" is enabled.
To convert these images to JPEG, I'm using the standard UIKit approach:
if let image = UIImage(data: heicData) {
let jpegData = image.jpegData(compressionQuality: 1.0)
}
However, I’ve noticed that this conversion often increases the image size significantly:
Original HEIC/HEIF: ~3 MB
Converted JPEG (quality: 1.0): ~8–12 MB
There’s no resolution change or image editing — it’s just a direct conversion. I understand that HEIC is more efficient than JPEG, but the increase in file size feels disproportionate. Is this kind of jump expected, or are there any recommended workarounds to avoid it?
It will use about 300 MB of memory, which causes a memory peak.
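One hedged alternative that sidesteps the UIImage round trip (and its decode-to-bitmap memory peak): transcode the original HEIC data directly with Image I/O and pick an explicit lossy quality. The function name and the 0.8 quality are placeholders:

import Foundation
import ImageIO
import UniformTypeIdentifiers

func jpegData(fromHEIC heicData: Data, quality: CGFloat = 0.8) -> Data? {
    guard let source = CGImageSourceCreateWithData(heicData as CFData, nil) else { return nil }
    let output = NSMutableData()
    guard let destination = CGImageDestinationCreateWithData(output as CFMutableData,
                                                             UTType.jpeg.identifier as CFString,
                                                             1, nil) else { return nil }
    // Copying from the source also preserves metadata and orientation.
    let options = [kCGImageDestinationLossyCompressionQuality as String: quality] as CFDictionary
    CGImageDestinationAddImageFromSource(destination, source, 0, options)
    return CGImageDestinationFinalize(destination) ? output as Data : nil
}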
When shooting with an iPhone 15 or later, it’s possible to capture HEIC or JPEG images that include gain map information conforming to the ISO 21496-1 standard. However, during image format transcoding, the HEIC codec is able to preserve the ISO 21496-1 gain map. But when converting from HEIC to JPEG, the gain map is transformed into the Apple Gain Map format instead. Is there any solution to this issue?
(Note: this is part 3 of a 3 part posting. See Part 1 or Part 2)
At WWDC25 we launched a new type of Lab event for the developer community - Group Labs. A Group Lab is a panel Q&A designed for a large audience of developers. Group Labs are a unique opportunity for the community to submit questions directly to a panel of Apple engineers and designers. Here are the highlights from the WWDC25 Group Lab for Camera & Photos.
WWDC25 Camera & Photos group lab ran for one hour at 6 PM PST on Tuesday June 10th, 2025
Question 24
What’s the best approach for optimizing barcode scanning using AVFoundation or Vision in low-light or angled scenarios?
Turn on flash in low-light scenarios
Lower framerate to improve exposure and reduce noise
Wait until the capture is in focus/notify your user that they need to get closer
Question 25
Recent iPhone models introduced macro mode, which automatically switches between lenses to account for the difference in focal distance. Is there an official API to implement this, or should I implement it myself using LiDAR values?
Using builtInTripleCamera and builtInDualWideCamera will automatically switch to macro when available
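For reference, selecting one of those virtual devices looks roughly like this (the fallback chain is just one sensible ordering):

import AVFoundation

// Virtual devices let the system switch to the ultra-wide camera for macro automatically.
let device = AVCaptureDevice.default(.builtInTripleCamera, for: .video, position: .back)
    ?? AVCaptureDevice.default(.builtInDualWideCamera, for: .video, position: .back)
    ?? AVCaptureDevice.default(.builtInWideAngleCamera, for: .video, position: .back)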
Question 26
Is there a way to quickly create a thumbnail after the user selects an image with PhotosPicker?
File provider API
Additional questions from the WWDC25 in-person labs that occurred later in the WWDC week
Question 1
When should I build my custom photo picker instead of using the system one?
Always start with the system picker -> try embeddable customization APIs -> fallback to custom picker for very special needs
Question 2
I'm building a new camera app for pros and I want to give my users the most un-processed image possible, and the most control over the capture as possible. How can I do that with AVCapture?
For stills: a brief Bayer RAW capture overview, or ProRAW if you want Apple's processing and dynamic range
For video: ProRes Log
Custom exposure settings are available through the APIs
Maybe a global/local tone mapping discussion?
Topic:
Media Technologies
SubTopic:
Photos & Camera
Tags:
Image I/O
Photos and Imaging
PhotoKit
Core Image
(Note: this is part 2 of a 3 part posting. See Part 1 or Part 3)
At WWDC25 we launched a new type of Lab event for the developer community - Group Labs. A Group Lab is a panel Q&A designed for a large audience of developers. Group Labs are a unique opportunity for the community to submit questions directly to a panel of Apple engineers and designers. Here are the highlights from the WWDC25 Group Lab for Camera & Photos.
WWDC25 Camera & Photos group lab ran for one hour at 6 PM PST on Tuesday June 10th, 2025
Question 10
Can we directly integrate auto-capture triggers (e.g., when image is steady or text is detected) using Vision and AVFoundation?
Yes, apps can use AVCaptureSession's video data output (VDO) + AVCapturePhotoOutput: run Vision on the VDO buffers and capture a photo when a certain scene or text is detected.
Just be careful to run Vision on the VDO buffers asynchronously so it doesn't cause frame drops.
Question 11
What Camera or Photos framework features support working with images from external media, like connected cameras or SD cards? Any best practices?
The ImageCaptureCore framework supports camera devices, memory cards, scanners
read and write, where supported
check out the docs to see how to browse connected devices, folders, files, etc.
Question 12
Hi Brad, to follow up on your SwiftUI cautionary note: using AVCaptureVideoPreview inside a UIViewRepresentable, is okay, right? Thanks all for the great info!
Yes, this is totally fine.
AppKit or UIKit views inside appropriate SwiftUI representables should be equivalent performance
Question 13
What’s the “right” way to transition media in my photos app between HDR modes? When I’m in a one-up view, we use HDR, but in other contexts (like thumbnail) we don’t want HDR. Is there a nice way to tone map?
There’s a suite of new System Tone Mapper APIs in this year’s OSes
CoreImage, ImageKit, CoreAnimation, CoreGraphics
For example:
CoreImage: new CISystemToneMap filter.
CoreAnimation: layer.preferredDynamicRange = CADynamicRangeConstrainedHigh
Image views (NSImageView/UIImageView/SwiftUI Image/CALayer) support animations on preferredDynamicRange
Can go from high to constrained to standard
Tone mapping is provided by the system (CISystemToneMap for controllable example)
Question 14
What is your recommendation to preprocess and upscale your depth map in order to render a realistic portrait mode image?
One way to do this: the CIEdgePreserveUpsample CIFilter can be used to upsample a lower-resolution depth map by using a higher-resolution RGB image as a guide.
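A sketch of that filter via CIFilterBuiltins, assuming (per the answer above) that the full-resolution RGB image is the guide and the low-resolution depth map is the small image:

import CoreImage
import CoreImage.CIFilterBuiltins

func upsampledDepth(depth: CIImage, guide: CIImage) -> CIImage? {
    let filter = CIFilter.edgePreserveUpsample()
    filter.inputImage = guide   // high-resolution RGB guide
    filter.smallImage = depth   // low-resolution depth map to upsample
    return filter.outputImage
}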
Question 15
For buffering frames for later processing from real-time camera output should we prefer a AVSampleBufferDisplayLayer centered approach or AVCaptureVideoDataOutputSampleBufferDelegate centered approach? When would we use each?
AVSampleBufferDisplayLayer and AVCaptureVideoDataOutputSampleBufferDelegate are used hand in hand for custom camera preview.
For buffering for later processing, ensure you make copies of VDO buffers to not drop frames from the output
Question 16
Hello, my question is about Deferred Photo Processing. Say I have a photo capture app that adds a CIFilter to the capture. How can I take advantage of Deferred Photo Processing, since I don’t know how to detect when the deferred captured photo is ready?
The CIFilter can be applied to the final image at that point
The photo will have to be re-inserted into the Photos library as an adjustment
Question 17
Is digital zoom (e.g., 1.5x) before taking a photo the same as cropping the photo afterward?
Digital zoom upscales the image to the output dimensions, while cropping afterward yields a smaller output image
So while digital zoom does crop, it also upscales
Question 18
How do you design camera interfaces that work for both casual users and photography enthusiasts?
Progressive disclosure: Put the most common controls up front, and make it easy for pros to drill down.
Sensible Defaults: Choose defaults that work well for casual users, but allow those defaults to be modified for photography enthusiasts
A good philosophy is: Keep the simple things easy, make the hard things possible
Question 19
Recent iPhone models introduced macro mode, which automatically switches between lenses to account for the difference in focal distance. Is there an official API to implement this, or should I implement it myself using LiDAR values?
Using builtInTripleCamera and builtInDualWideCamera will automatically switch to macro when available
Question 20
a couple of years ago at WWDC, the option of replacing a camera with a virtual camera was mentioned. How does one do that - make the “physical” camera effectively disappear, so only the virtual camera is accessible to the user?
You can't prevent the built-in camera from being available to the user
Question 21
Can developers now integrate custom Core ML models with Vision for on-device photo analysis more seamlessly?
Yes they can: use CoreMLRequest and provide their model container
Been supported for a while (iOS 18/macOS 15)
For more details go to Machine Learning & AI group lab Thursday
use smaller images for better performance
Question 22
What would you recommend for capture of the new immersive and spatial formats?
To capture Spatial Video use AVCaptureMovieFileOutput’s spatialVideoCaptureEnabled property
Not all device formats support spatial capture, check AVCaptureDevice.activeFormat.spatialVideoCaptureSupported
See WWDC 2024 talk “Build compelling spatial photo and video experiences” for more details
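In code, that is roughly the following (assuming an already-configured session where device is the AVCaptureDevice and movieOutput is the AVCaptureMovieFileOutput; property spellings follow the Swift renaming of the names above):

import AVFoundation

func enableSpatialCaptureIfPossible(device: AVCaptureDevice, movieOutput: AVCaptureMovieFileOutput) {
    if device.activeFormat.isSpatialVideoCaptureSupported {
        movieOutput.isSpatialVideoCaptureEnabled = true
    }
}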
Question 23
You mentioned JPEG-XL. What is the current status of support on iOS and macOS for encoding and decoding?
For decoding, we support JPEG-XL files in all our OSes, regular SDR files, as well as ISO HDR files.
For encoding, we only support JPEG-XL for ProRAW DNG capture in the Camera app or via third-party AVFoundation APIs.
If you have any requests for improvement or new features related to JPEG-XL, please file a Feedback request using the Feedback Assistant.
(Note: this is part 2 of a 3 part posting. See Part 1 or Part 3)
Topic:
Media Technologies
SubTopic:
Photos & Camera
Tags:
Image I/O
Photos and Imaging
PhotoKit
Core Image