I'm reposting here my FB17602742, submitted yesterday:
The strong wording of this message comes from years of Apple ignoring the needs of users who can't tolerate UI animations and convulsions.
At this point, it's clear that Apple is either intentionally harming users like me or simply doesn't care about meeting even the most basic accessibility standards on macOS.
Yes, many UI animations and convulsions can, fortunately, be disabled - but not through straightforward UI controls. Instead, users are forced to look for obscure Terminal commands found scattered across the Internet.
The "Reduce motion" checkbox in System Settings is simply a fake control that doesn't do anything - instead of actually disabling all UI animations and convulsions.
What's worse, two of the most offensive UI animations cannot be disabled at all. Apple has consistently dismissed requests to let users disable the following UI animations:
Scroll bar rollover highlight effect (introduced in macOS 10.7.3). Every time the cursor passes over a scroll bar, it gets highlighted. This draws the user's attention to random scroll bars for no reason - just because the cursor happened to pass over them. It results in HUNDREDS of unnecessary, annoying distractions daily!
Expand/collapse animation of NSOutlineView (e.g., when opening/closing folders in the list view in the Finder, or any other app using outline views). This animation is extremely distracting, irritating, and time-wasting.
Global Accessibility Awareness Day is approaching.
Dear Apple,
Please adhere to the most basic accessibility standards. Stop the needless suffering of countless users like me. Let us disable the two aforementioned UI convulsions.
Thank you for your attention to the issue.
I need to direct text-to-speech generated audio from my app simultaneously to a Bluetooth speaker device AND to the internal iPad speaker. The app uses AVSpeechSynthesizer and several third-party speech engines. How best to do this?
I noticed the outputChannels property on AVSpeechSynthesizer...are there any examples of how to use this?
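Here is a rough, untested sketch of what I've been experimenting with, in case it clarifies what I'm after. It assumes a multi-route audio session (and I'm not certain Bluetooth A2DP outputs are even reachable under .multiRoute):

import AVFoundation

// Untested sketch: try to send synthesized speech to every current output channel.
func speakOnAllOutputs(_ text: String, with synthesizer: AVSpeechSynthesizer) throws {
    let session = AVAudioSession.sharedInstance()
    try session.setCategory(.multiRoute, mode: .default)
    try session.setActive(true)

    // One AVAudioSessionChannelDescription per hardware channel of every active
    // output port (e.g. the built-in speaker plus an external device).
    let channels = session.currentRoute.outputs.flatMap { $0.channels ?? [] }
    synthesizer.outputChannels = channels

    synthesizer.speak(AVSpeechUtterance(string: text))
}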
Topic:
Accessibility & Inclusion
SubTopic:
General
Hey everyone!
I am developing a screen time limit app to help people spend less time in distracting apps.
It works this way: people choose the apps that are unhealthy for them, and productivity apps on the other side. In the app, you can exchange time spent on healthy habits for time to scroll or use other distracting apps.
The idea took off on social media, and the app already has 100k followers without even being launched yet.
So I am waiting just for one feature permission from Apple, and they have not given me any answer since I applied 3 weeks ago.
There are a lot of similar apps on the market, and this feature exists in other screen time limit apps.
Why is app blocking permission needed?
Time Exchange Functionality:
Users independently select which apps are productive and which are distracting for them.
The system blocks the "negative" apps until the user accumulates enough time in the "positive" ones (a minimal sketch of this step follows this list). This encourages healthy device usage.
Full User Control:
All apps to be blocked are manually selected by the user in the settings.
The extension does not impose any restrictions without explicit permission.
Transparency and Security:
Blocking happens locally, with no data collected about app usage.
We adhere to Apple’s privacy policy.
Compliance with App Store Guidelines:
We understand that app blocking is a sensitive feature, but in our case it:
Is used for the benefit of the user (digital detox, productivity improvement).
Does not interfere with system processes or other developers’ apps.
Does not misuse access to APIs.
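For context, the blocking step itself is just the standard Screen Time API pattern. Here is a minimal sketch (assuming the Family Controls entitlement is granted and the user has made a FamilyActivitySelection in FamilyActivityPicker; the function names are mine):

import FamilyControls
import ManagedSettings

// Minimal sketch of the blocking step described above.
let store = ManagedSettingsStore()

func block(_ selection: FamilyActivitySelection) {
    // Shield exactly the apps and categories the user marked as "distracting".
    store.shield.applications = selection.applicationTokens
    store.shield.applicationCategories = .specific(selection.categoryTokens)
}

func unblockAll() {
    // Lift the shield once enough "productive" time has been earned.
    store.shield.applications = nil
    store.shield.applicationCategories = nil
}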
My question to the forum is:
Did you have similar problems, and how did you resolve them?
Are there any ways to speed up the process or contact someone from the approval team directly?
Should I give up and release it on Android?
I am very disappointed and frustrated. Hope to get some useful tips.
Thank you very much!
Topic:
Accessibility & Inclusion
SubTopic:
General
I have a UITextField in my application for entering a state. If I tap on it, a UIPickerView pops up and lets the user select a state (but they can still type too).
The issue relates to Full Keyboard Access. If we select the UITextField using an external keyboard, the UIPickerView appears, but in order to reach it, the user has to tab through the whole view controller until they arrive at the UIPickerView at the end.
What would be nice is to a) move focus directly to the UIPickerView (have it highlighted in blue and scrollable right away with keyboard) or b) make the UIPickerView the next view that's accessible when tabbing over or using the arrow keys.
I've tried using:
UIAccessibility notifications (both .screenChanged and .layoutChanged, with and without a delay). This ended up only announcing the view, but didn't help with full keyboard access.
Making the UIPickerView a first responder when it appears.
Attempting to change the accessibilityElements order (but with so many views and views within views, this isn't really a viable option either).
Pressing Tab and then the Right Arrow key will quickly take the user to the end of the chain of accessibility elements, in other words, to the UIPickerView. But there has to be a cleaner way of just automatically setting the focus to the UIPickerView or making it the next element reachable by pressing the arrow key.
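For reference, this is the kind of thing I was hoping would work when the picker appears. It's only a sketch (assuming iOS 15+, that statePicker is a property on the view controller, and that Full Keyboard Access actually honors UIFocusSystem focus updates, which I'm not sure about):

// Sketch: ask the focus system to move to the picker once it is on screen.
// `statePicker` and `view` are assumed properties of the view controller.
func showStatePicker() {
    statePicker.isHidden = false

    if let focusSystem = UIFocusSystem.focusSystem(for: view) {
        // Request that keyboard focus jump directly to the picker.
        focusSystem.requestFocusUpdate(to: statePicker)
        focusSystem.updateFocusIfNeeded()
    }
}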
Hello,
I'm currently unable to access App Store Connect. When I try to open https://appstoreconnect.apple.com, I receive the following error message:
“appstoreconnect.apple.com is currently unable to handle this request.”
I’ve tried the following steps, but the issue persists:
Cleared browser cache and cookies
Tried different browsers (Safari, Chrome)
Attempted from multiple devices and networks
Is this a known issue or is there any workaround available?
Would appreciate any help or update on the current status.
Thank you,
Topic:
Accessibility & Inclusion
SubTopic:
General
The Text view seems to automatically prevent orphaned words on screen, barring exceptional circumstances such as the font size being too large.
I couldn't find any documentation on this behaviour or how to configure it, and I would also be very interested in how it's implemented.
Thanks!
Topic:
Accessibility & Inclusion
SubTopic:
General
I have subscribed to the developer program, but it’s already been a day and it still shows “is not enrolled in the Apple Developer Program.”
Topic:
Accessibility & Inclusion
SubTopic:
General
I’m developing an ARKit application where I aim to attach procedurally generated audio to detected planes in the environment. While using a static audio file with SCNAudioSource and SCNAudioPlayer works as expected, integrating procedural audio via AVAudioSourceNode does not produce any sound, nor does it generate any error messages (see my Stack Overflow post).
Working Implementation with Static Audio File:
let audioSource = SCNAudioSource(fileNamed: "sound.caf")! // any bundled audio file; the name is just an example
let audioPlayer = SCNAudioPlayer(source: audioSource)
node.addAudioPlayer(audioPlayer)
Attempted Implementation with Procedural Audio:
let audioNode = AVAudioSourceNode { _, _, frameCount, audioBufferList in
    // Audio generation code
    return noErr
}
let audioPlayer = SCNAudioPlayer(avAudioNode: audioNode)
node.addAudioPlayer(audioPlayer)
In this setup, the AVAudioSourceNode successfully generates audio when connected directly to an AVAudioEngine. However, when used with SCNAudioPlayer and attached to an SCNNode, it fails to produce sound, even though creating procedural audio with an AVAudioNode is documented here:
Apple docs
Additionally, I explored the WWDC18 AR game project, SwiftShot, which utilizes SCNAudioPlayer(avAudioNode:). After updating it for the latest Xcode, the graphics function correctly, but the audio does not play. I also noted that the Apple documentation mentions an audioPlayerWithAVAudioNode: method, stating:
Using this initializer is typically not necessary. Instead, call the audioPlayerWithAVAudioNode: method, which returns a cached audio player object if one for the specified AVAudioNode object has already been created and is available for use.
However, this method does not appear to be available in Swift. Any insights or guidance on this matter would be greatly appreciated.
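In case it helps frame the question, the workaround I'm considering is to bypass SCNAudioPlayer entirely and attach the source node to the renderer's own engine. This is a rough, untested sketch (it assumes sceneView is the ARSCNView, audioNode is the AVAudioSourceNode from above, planeNode is the detected plane's node, and that AVAudioSourceNode adopts AVAudio3DMixing so its position can be set):

let engine = sceneView.audioEngine
let environment = sceneView.audioEnvironmentNode

engine.attach(audioNode)
// The environment node only spatializes mono input, so connect with a mono format.
let sampleRate = engine.outputNode.outputFormat(forBus: 0).sampleRate
let monoFormat = AVAudioFormat(standardFormatWithSampleRate: sampleRate, channels: 1)
engine.connect(audioNode, to: environment, format: monoFormat)

// Keep the source positioned at the plane's anchor; SceneKit won't do this for us here.
audioNode.position = AVAudio3DPoint(
    x: planeNode.simdWorldPosition.x,
    y: planeNode.simdWorldPosition.y,
    z: planeNode.simdWorldPosition.z
)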
After updating to the iOS 26 Beta version, the screenshot option within the AssistiveTouch menu has stopped working. Tapping on the "Screenshot" icon does not perform any action.
Topic:
Accessibility & Inclusion
SubTopic:
General
There is no way to set the Allow Mobile Data switch. I have the latest update, but it still does not work. I realised this when I went to another country and could not set my mobile data, and when I came back I still could not.
Topic:
Accessibility & Inclusion
SubTopic:
General
Hi everyone,
My team and I are developing an accessibility-focused VisionOS app (MindTap) as part of a university project, aiming to support individuals with Locked-In Syndrome using Brain-Computer Interface (BCI) signals to trigger interactions (e.g., tapping) within the Apple Vision Pro environment.
Problem 1: Simulating Eye Tracking in Simulator
We are testing onHover with Send pointer to the device under I/O > Input in the simulator, and while it mostly works (a bit laggy), we found that onHover won't function on the actual Vision Pro hardware. From what I understand, we should be using FocusState for proper gaze interaction, but testing this requires the physical device. Is there any workaround or official Apple-recommended way to simulate Focus-based gaze detection without a real Vision Pro?
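For reference, here is the kind of FocusState-based approach we're planning to test on hardware. It's only a minimal sketch (the view and property names are placeholders, and we don't yet know how this behaves with real gaze input on device):

import SwiftUI

// Placeholder sketch: a focusable target whose action we only trigger
// when the BCI "1" arrives over WebSocket while the target has focus.
struct TapTarget: View {
    @FocusState private var isFocused: Bool
    let perform: () -> Void

    var body: some View {
        Button("Select", action: perform)
            .focusable()
            .focused($isFocused)
            .onChange(of: isFocused) { _, focused in
                print("Target focused: \(focused)")
            }
    }
}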
Problem 2: WebSocket-triggered "Click" doesn't work outside the app
We successfully use WebSocket to send a custom signal (a "1" from the brain signal device) to trigger an action inside our app. However, when the user opens a third-party app like Apple News, the WebSocket-triggered "click" no longer works.
We suspect this is due to sandbox restrictions or lack of system-level permissions.
Is it possible in any way to:
Trigger interaction events outside the app using custom input (like BCI via Websocket)?
Access system-wide click/tap simulation APIs from within VisionOS apps
Integrate this with accessibility services (like Voice Control or AssistiveTouch)
We'd appreciate any official guidance or tips from others building similar accessibility apps with alternative input methods in VisionOS.
Thanks in advance for any insight you can provide!
Topic:
Accessibility & Inclusion
SubTopic:
General
Tags:
Xcode
Accessibility
iPad and iOS apps on visionOS
I am currently trying to get my app ready for full external keyboard support; while testing, I found an issue with the native DatePicker.
Whenever I enter the DatePicker with an external keyboard, it only jumps to the time picker, and I am not able to move away from it. Arrow keys don't work; Tab and Control+Tab only move me to the toolbar and back.
This is what they look like:
private var datePicker: some View {
    DatePicker(
        "",
        selection: date,
        in: minDate...,
        displayedComponents: [.date]
    )
    .fixedSize()
    .accessibilityIdentifier("\(datePickerLabel).DatePicker")
}

private var timePicker: some View {
    DatePicker(
        "",
        selection: date,
        in: minDate...,
        displayedComponents: [.hourAndMinute]
    )
    .fixedSize()
    .accessibilityIdentifier("\(datePickerLabel).TimePicker")
}

private var datePickerLabelView: some View {
    Text(datePickerLabel.localizedString)
        .accessibilityIdentifier(datePickerLabel)
}
And we implement it like this in the view:
HStack {
    datePickerLabelView
    Spacer()
    datePicker
    timePicker
}
Does anyone know how to fix this behavior? Is it our fault or is it the system? The issue comes up both in iOS 18 and 26.
Topic:
Accessibility & Inclusion
SubTopic:
General
Tags:
External Accessory
iOS
Accessibility
SwiftUI
Updated to iOS 26 beta and now the TV remote app in the control center won’t open. I’ve tried the following:
Restart phone
Remove shortcut and re-add
Can't find any other troubleshooting methods for this issue online, so I'm guessing it's a new problem.
Topic:
Accessibility & Inclusion
SubTopic:
General
I’m developing an iOS app, and I’ve noticed that when the user enables Accessibility → Display & Text Size → Color Filters → Grayscale, my app icon loses a lot of visual contrast. The original colored version looks fine, but in grayscale it appears “flat” and harder to distinguish, unlike a pure black-and-white design.
What I want to achieve:
Ensure the app icon remains visually clear and high-contrast even when iOS renders it in grayscale.
Ideally, provide an alternate “high-contrast” app icon version when grayscale mode is enabled.
What I’ve tried:
Increased color contrast in the original icon design.
Added outlines and stronger shapes.
Tested with grayscale filters in design tools.
Researched Asset Catalog and alternate icons, but found no documented API to detect or respond to grayscale mode.
Questions:
Is there any API in iOS that allows detecting when the system is in grayscale mode so that I can programmatically switch to an alternate app icon?
If not, are there Apple-recommended best practices for designing app icons that still look clear in grayscale?
Are there any accessibility guidelines specifically addressing icon design for grayscale or color-blind modes?
Additional info:
iOS version tested: iOS 17.5
Development in Swift + SwiftUI, using Asset Catalog for icons.
I am aware that iOS supports alternate icons via setAlternateIconName, but I haven’t found a trigger for grayscale mode.
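One candidate I'm now experimenting with (I'm not certain it actually tracks the Color Filters grayscale setting, as opposed to the older standalone Grayscale toggle) is UIAccessibility.isGrayscaleEnabled together with setAlternateIconName. A rough sketch, where the alternate icon name is hypothetical:

import UIKit

final class IconAccessibilityObserver {
    // Hypothetical alternate icon registered in the Asset Catalog / Info.plist.
    private let highContrastIconName = "AppIcon-HighContrast"
    private var observer: NSObjectProtocol?

    func startObserving() {
        observer = NotificationCenter.default.addObserver(
            forName: UIAccessibility.grayscaleStatusDidChangeNotification,
            object: nil,
            queue: .main
        ) { [weak self] _ in
            self?.updateIcon()
        }
        updateIcon()
    }

    private func updateIcon() {
        guard UIApplication.shared.supportsAlternateIcons else { return }
        let target = UIAccessibility.isGrayscaleEnabled ? highContrastIconName : nil
        guard UIApplication.shared.alternateIconName != target else { return }
        UIApplication.shared.setAlternateIconName(target) { error in
            if let error {
                print("Icon switch failed: \(error)")
            }
        }
    }
}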
I’m trying to enroll in the Apple Developer Program as an individual. I’ve gone through the steps on the website and started the purchase process. However, after a couple of days when I return to the site, it doesn’t remember my progress — I have to start the enrollment from scratch every time.
Is this expected behavior? Am I missing a step to save my progress or complete the enrollment properly?
Any help or guidance would be appreciated. Thank you!
I'm developing a document editor for macOS using AppKit, which supports structured content such as titles and multiple heading levels—similar to what you see in the Pages app.
I'm looking for a way to programmatically mark a specific substring within an NSTextView as a heading, so that VoiceOver can recognize it and announce it appropriately (e.g., by saying “heading” before reading the text). This would be similar in spirit to how NSAccessibilityLinkTextAttribute works for links.
Is there an existing accessibility text attribute or recommended approach to achieve this behavior for headings? If not, I’d appreciate any guidance or suggestions on how best to implement this in a VoiceOver-friendly way.
Thank you in advance for your help!
Best regards,
Topic:
Accessibility & Inclusion
SubTopic:
General
Hello Albert!
I am experiencing some strange bugs around DeviceActivityEvents (part of the DeviceActivity framework) on iOS 26 / iOS 26.1 / iOS 26.2 beta:
When creating a DeviceActivityEvent we can assign a threshold and applicationTokens.
The idea is that after the user has spent the threshold amount of time in those apps, eventDidReachThreshold() is called.
The property includesPastActivity is set to false.
On iOS 26, however, it quite often happens (and quite reliably after updating to a new beta seed) that eventDidReachThreshold() is called immediately (after a couple of seconds) instead of waiting for the threshold to be met.
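For reference, the setup is essentially the standard pattern; here is a stripped-down sketch (the activity/event names and the 15-minute threshold are only examples):

import DeviceActivity
import ManagedSettings

// Stripped-down sketch of the monitoring setup described above.
func startLimit(for apps: Set<ApplicationToken>) throws {
    let schedule = DeviceActivitySchedule(
        intervalStart: DateComponents(hour: 0, minute: 0),
        intervalEnd: DateComponents(hour: 23, minute: 59),
        repeats: true
    )
    let event = DeviceActivityEvent(
        applications: apps,
        categories: [],
        webDomains: [],
        threshold: DateComponents(minute: 15),
        includesPastActivity: false   // explicitly false, as noted above (iOS 17.4+)
    )
    try DeviceActivityCenter().startMonitoring(
        DeviceActivityName("dailyLimit"),
        during: schedule,
        events: [DeviceActivityEvent.Name("thresholdReached"): event]
    )
}

// In the DeviceActivityMonitor extension, eventDidReachThreshold(_:activity:) should
// fire only after ~15 minutes of usage; on the iOS 26 betas it fires within seconds.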
Is anyone else seeing similar issues on iOS 26 / iOS 26.1 / iOS 26.2 beta?
The only workaround I have found is to ask users to revoke and re-grant Screen Time permissions. This only holds for about two weeks, though, or at most until the next iOS 26 beta update is installed, so unfortunately it is not a permanent solution.
Feedback (incl. sysdiagnoses and sample project) is filed under:
FB18061981
FB18927456
One of our users has filed their own feedback request as well:
FB20817853
Thanks a lot for any help on this!
"Double-tap three fingers and drag to change zoom" should suppress "Three Finger to Drag". Currently these gestures are triggered simultaneously, for no real reason. I saw different behaviors in different environments, but none of them is desired.
This seems like a bug, so I filed a feedback describing the current and desired behavior.
Hi,
On iOS, I'd like to mark views that are inside a LazyVStack as headers for VoiceOver (make them appear in the headings rotor).
In a VStack, you just have to add .accessibilityAddTraits(.isHeader) to your header view. However, if your view is in a LazyVStack, that won't work if the view is not visible. As its name implies, LazyVStack is lazy, so that makes sense.
There is very little information online about system rotors, but it seems you are supposed to use .accessibilityRotor() with the headings system rotor (.accessibilityRotor(.headings)) outside of the LazyVStack. Something like the following.
.accessibilityRotor(.headings) {
    ForEach(entries) { entry in
        // entry.id must be the same as the id of the SwiftUI view it is about
        AccessibilityRotorEntry(entry.name, id: entry.id)
    }
}
It kind of works, but only kind of. When using .accessibilityAddTraits(.isHeader) in a VStack, the view is in the headings rotor as soon as you change screen. However, when using .accessibilityRotor(.headings), the headers (headings?) are not in the headings rotor at the time the screen appears. You have to move the accessibility focus inside the screen before your headers show up.
I'm a beginner in regards to VoiceOver, so I don't know how a blind user used to VoiceOver would perceive this, but it feels to me that having to move the focus before the headers are in the headings rotor would mean some users would miss them.
So my question is: is there a way to get headers inside a LazyVStack (that are not necessarily visible at first) to be in the headings rotor as soon as the screen appears? (be it using .accessibilityRotor(.headings) or anything else)
The "SwiftUI Accessibility: Beyond the basics" talk from WWDC 2021 mentions custom rotors, not system rotors, but that should be close enough. It mentions that for accessibilityRotor to work properly it has to be applied on an accessibility container, so just in case I tried to move my .accessibilityRotor(.headings) to multiple places, with and without the accessibilityElement(children: .contain) modifier, but that did not seem to change the behavior (and I could not understand why accessibilityRotor could not automatically make the view it is applied on an accessibility container if needed).
Also, a related question: when using .accessibilityRotor(.headings) on a screen, is it fine to mix uses of .accessibilityAddTraits(.isHeader) and .accessibilityRotor(.headings)? In a screen with multiple types of content (something like ScrollView { VStack { MyHeader(); LazyVStack { /* some content */ }; LazyVStack { /* something else */ } } }), having to declare all headers in one place would make code reusability harder.
Thanks
Japanese “Hattori” TTS voice missing from Settings > General > Read & Speak > Voices > Japanese on iOS 26
Steps: Open the path above → “Hattori” is not listed and cannot be downloaded
Expected: Hattori is available to download and select
Actual: Hattori is absent from the catalog
Regression: Was available on iOS 18.x on the same device