Hi,
I have just implemented an Audio Unit v3 host.
AgsAudioUnitPlugin *audio_unit_plugin;
AVAudioUnitComponentManager *audio_unit_component_manager;
NSArray<AVAudioUnitComponent *> *av_component_arr;
AudioComponentDescription description;
guint i, i_stop;
if(!AGS_AUDIO_UNIT_MANAGER(audio_unit_manager)){
  return;
}
audio_unit_component_manager = [AVAudioUnitComponentManager sharedAudioUnitComponentManager];
/* effects */
description = (AudioComponentDescription) {0,};
description.componentType = kAudioUnitType_Effect;
av_component_arr = [audio_unit_component_manager componentsMatchingDescription:description];
i_stop = [av_component_arr count];
for(i = 0; i < i_stop; i++){
  ags_audio_unit_manager_load_component(audio_unit_manager,
                                        (gpointer) av_component_arr[i]);
}
/* instruments */
description = (AudioComponentDescription) {0,};
description.componentType = kAudioUnitType_MusicDevice;
av_component_arr = [audio_unit_component_manager componentsMatchingDescription:description];
i_stop = [av_component_arr count];
for(i = 0; i < i_stop; i++){
  ags_audio_unit_manager_load_component(audio_unit_manager,
                                        (gpointer) av_component_arr[i]);
}
But this doesn't show me Audio Unit v2 plugins. Why?
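For comparison, here is a hedged Swift sketch (not from the original post) that enumerates every registered component via a wildcard description; AVAudioUnitComponentManager is expected to return both v2 and v3 components this way:

import AVFoundation

// Hedged sketch: a zeroed AudioComponentDescription acts as a wildcard,
// so the manager should return every registered component, v2 and v3 alike.
let wildcard = AudioComponentDescription()   // all fields zero
let components = AVAudioUnitComponentManager.shared().components(matching: wildcard)

for component in components {
    print(component.manufacturerName, component.name, component.versionString)
}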
regards, Joël
I'm encountering errors while using AVAudioEngine with voice processing enabled (setVoiceProcessingEnabled(true)) in scenarios where the input and output audio devices are not the same. This issue arises specifically with mismatched devices, preventing the application from functioning as expected.
Works: Paired devices (e.g., MacBook Pro mic → MacBook Pro speakers)
Fails: Mismatched devices (e.g., AirPods mic → MacBook Pro speakers)
When using paired input and output devices:
The setup works as expected.
Example: MacBook Pro microphone → MacBook Pro speakers.
When using mismatched devices:
AVAudioEngine setup fails during aggregate device construction.
Example: AirPods microphone → MacBook Pro speakers.
Error logs indicate a channel count mismatch.
Here are partial logs; due to the content limit, I cannot post them in full.
AUVPAggregate.cpp:1000 client-side input and output formats do not match (err=-10875)
AUVPAggregate.cpp:1036 err=-10875
AVAEInternal.h:109 [AVAudioEngineGraph.mm:1344:Initialize: (err = PerformCommand(*outputNode, kAUInitialize, NULL, 0)): error -10875
AggregateDevice.mm:329 Failed expectation of constructed aggregate (312): mInput.streamChannelCounts == inputStreamChannelCounts
AggregateDevice.mm:331 Failed expectation of constructed aggregate (312): mInput.totalChannelCount == std::accumulate(inputStreamChannelCounts.begin(), inputStreamChannelCounts.end(), 0U)
AggregateDevice.mm:182 error fetching default pair
AggregateDevice.mm:329 Failed expectation of constructed aggregate (336): mInput.streamChannelCounts == inputStreamChannelCounts
AggregateDevice.mm:331 Failed expectation of constructed aggregate (336): mInput.totalChannelCount == std::accumulate(inputStreamChannelCounts.begin(), inputStreamChannelCounts.end(), 0U)
AUHAL.cpp:1782 ca_verify_noerr: [AudioDeviceSetProperty(mDeviceID, NULL, 0, isInput, kAudioDevicePropertyIOProcStreamUsage, theSize, theStreamUsage), 560227702]
AudioHardware-mac-imp.cpp:3484 AudioDeviceSetProperty: no device with given ID
AUHAL.cpp:1782 ca_verify_noerr: [AudioDeviceSetProperty(mDeviceID, NULL, 0, isInput, kAudioDevicePropertyIOProcStreamUsage, theSize, theStreamUsage), 560227702]
AggregateDevice.mm:182 error fetching default pair
AggregateDevice.mm:329 Failed expectation of constructed aggregate (348): mInput.streamChannelCounts == inputStreamChannelCounts
AggregateDevice.mm:331 Failed expectation of constructed aggregate (348): mInput.totalChannelCount == std::accumulate(inputStreamChannelCounts.begin(), inputStreamChannelCounts.end(), 0U)
Is it possible to use voice processing with different input/output devices?
If yes, are there any specific configurations required to handle mismatched devices?
How can we resolve channel count mismatch errors during aggregate device construction?
Are there settings or API adjustments to enforce compatibility between input/output devices?
Are there any workarounds or alternative approaches to achieve voice processing functionality with mismatched devices?
For instance, can we force an intermediate channel configuration or downmix input/output formats?
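For what it's worth, here is a minimal Swift sketch of one possible workaround (purely an assumption, not a confirmed fix): enable voice processing on the input node and route everything through an explicit mono intermediate format so both sides agree on channel count.

import AVFoundation

// Hypothetical workaround sketch: force a common mono format between the
// (possibly mismatched) input and output devices. Whether this avoids the
// aggregate-device channel-count assertion is exactly the open question.
let engine = AVAudioEngine()

do {
    // Voice processing has to be enabled before the engine starts.
    try engine.inputNode.setVoiceProcessingEnabled(true)

    // Assumed intermediate format: mono, 48 kHz, instead of the devices' native formats.
    let mono = AVAudioFormat(standardFormatWithSampleRate: 48_000, channels: 1)!

    let mixer = AVAudioMixerNode()
    engine.attach(mixer)
    engine.connect(engine.inputNode, to: mixer, format: mono)
    engine.connect(mixer, to: engine.mainMixerNode, format: mono)

    try engine.start()
} catch {
    print("voice-processing setup failed: \(error)")   // e.g. the -10875 seen in the logs above
}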
Hey there, I just upgraded to macOS Tahoe on an Apple MacBook Pro 2019 16-inch. I am using IntelliJ IDEA and Flutter to develop a mobile app, which I test in the Simulator app running iOS 18.4.
The issue:
When I start the Simulator app (during the loading phase as well as while it is running), the audio from an already open YouTube tab in Safari (this happens in Chrome as well) glitches and becomes noise.
A fix I found online is to kill the audio daemon on macOS. This works using the command "sudo killall coreaudiod", which kills the audio process (while the Simulator is running); macOS then restarts the audio daemon and the audio works fine alongside the open Simulator.
I just want to ask: is there a permanent fix for this? Is Apple working on a fix in an upcoming update?
I am trying to use SpeechTranscriber from the Speech framework. Is it possible to use it in the iOS 26 Simulator (macOS Tahoe)? The function "supportedLocales" returns an empty array.
Hi,
I just started to develop audio unit hosting support in my application.
Offline rendering seems to work, except that I hear no output. Why?
I suspect something goes wrong with the player.
I connect to CoreAudio in a different location in the code.
Here are some error messages I faced so far:
2025-08-14 19:42:04.132930+0200 com.gsequencer.GSequencer[34358:18611871] [avae] AVAudioEngineGraph.mm:4668 Can't retrieve source node to play sequence because there is no output node!
2025-08-14 19:42:04.151171+0200 com.gsequencer.GSequencer[34358:18611871] [avae] AVAudioEngineGraph.mm:4668 Can't retrieve source node to play sequence because there is no output node!
2025-08-14 19:43:08.344530+0200 com.gsequencer.GSequencer[34358:18614927] AUAudioUnit.mm:1417 Cannot set maximumFramesToRender while render resources allocated.
2025-08-14 19:43:08.346583+0200 com.gsequencer.GSequencer[34358:18614927] [avae] AVAEInternal.h:104 [AVAudioSequencer.mm:121:-[AVAudioSequencer(AVAudioSequencer_Player) startAndReturnError:]: (impl->Start()): error -10852
** (<unknown>:34358): WARNING **: 19:43:08.346: error during audio sequencer start - -10852
I have implemented an AVAudioEngine-based Audio Unit host. Here I instantiate the player and the effect:
/* audio engine */
audio_engine = [[AVAudioEngine alloc] init];
fx_audio_unit_audio->audio_engine = (gpointer) audio_engine;
av_format = (AVAudioFormat *) fx_audio_unit_audio->av_format;
/* av audio player node */
av_audio_player_node = [[AVAudioPlayerNode alloc] init];
/* av audio unit */
av_audio_unit_effect = [[AVAudioUnitEffect alloc] initWithAudioComponentDescription:[((AVAudioUnitComponent *) AGS_AUDIO_UNIT_PLUGIN(base_plugin)->component) audioComponentDescription]];
av_audio_unit = (AVAudioUnit *) av_audio_unit_effect;
fx_audio_unit_audio->av_audio_unit = av_audio_unit;
/* audio sequencer */
av_audio_sequencer = [[AVAudioSequencer alloc] initWithAudioEngine:audio_engine];
fx_audio_unit_audio->av_audio_sequencer = (gpointer) av_audio_sequencer;
/* output node */
[[AVAudioOutputNode alloc] init];
/* audio player and audio unit */
[audio_engine attachNode:av_audio_player_node];
[audio_engine attachNode:av_audio_unit];
[audio_engine connect:av_audio_player_node to:av_audio_unit format:av_format];
[audio_engine connect:av_audio_unit to:[audio_engine outputNode] format:av_format];
ns_error = NULL;
[audio_engine enableManualRenderingMode:AVAudioEngineManualRenderingModeOffline
                                 format:av_format
                      maximumFrameCount:buffer_size
                                  error:&ns_error];
if(ns_error != NULL &&
[ns_error code] != noErr){
g_warning("enable manual rendering mode error - %d", [ns_error code]);
}
ns_error = NULL;
[[av_audio_unit AUAudioUnit] allocateRenderResourcesAndReturnError:&ns_error];
if(ns_error != NULL &&
[ns_error code] != noErr){
g_warning("Audio Unit allocate render resources returned error - ErrorCode %d", [ns_error code]);
}
Then I render in a dedicated thread.
ns_error = NULL;
[audio_engine startAndReturnError:&ns_error];
if(ns_error != NULL &&
[ns_error code] != noErr){
g_warning("error during audio engine start - %d", [ns_error code]);
}
[av_audio_sequencer prepareToPlay];
ns_error = NULL;
[av_audio_sequencer startAndReturnError:&ns_error];
if(ns_error != NULL &&
[ns_error code] != noErr){
g_warning("error during audio sequencer start - %d", [ns_error code]);
}
[av_audio_player_node play];
while(is_running){
/* pre sync */
/* IO buffers */
av_output_buffer = (AVAudioPCMBuffer *) scope_data->av_output_buffer;
av_input_buffer = (AVAudioPCMBuffer *) scope_data->av_input_buffer;
/* fill input buffer */
/* schedule av input buffer */
frame_position = 0; // (gint64) ((note_offset * absolute_delay) + delay_counter) * buffer_size;
av_audio_player_node = (AVAudioPlayerNode *) fx_audio_unit_audio->av_audio_player_node;
AVAudioTime *av_audio_time = [[AVAudioTime alloc] initWithHostTime:frame_position sampleTime:frame_position atRate:((double) samplerate)];
[av_audio_player_node scheduleBuffer:av_input_buffer atTime:av_audio_time options:0 completionHandler:nil];
/* render */
ns_error = NULL;
status = [audio_engine renderOffline:AGS_FX_AUDIO_UNIT_AUDIO_FIXED_BUFFER_SIZE toBuffer:av_output_buffer error:&ns_error];
if(ns_error != NULL &&
[ns_error code] != noErr){
g_warning("render offline error - %d", [ns_error code]);
}
}
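One thing worth noting: in manual (offline) rendering mode the engine is detached from the audio device, so rendered audio is delivered only into the buffer passed to renderOffline; hearing it requires writing those buffers to a file or to a device yourself. For reference, a minimal self-contained Swift sketch of an offline render pass (formats, frame counts and the silent input buffer are assumptions, not the original GSequencer code):

import AVFoundation

let engine = AVAudioEngine()
let player = AVAudioPlayerNode()
engine.attach(player)

let format = AVAudioFormat(standardFormatWithSampleRate: 48_000, channels: 2)!
engine.connect(player, to: engine.mainMixerNode, format: format)

let frames: AVAudioFrameCount = 4096

do {
    // Manual rendering must be enabled before the engine is started.
    try engine.enableManualRenderingMode(.offline, format: format, maximumFrameCount: frames)
    try engine.start()
    player.play()

    // A silent buffer stands in for real input here.
    let input = AVAudioPCMBuffer(pcmFormat: format, frameCapacity: frames)!
    input.frameLength = frames
    player.scheduleBuffer(input, completionHandler: nil)

    let output = AVAudioPCMBuffer(pcmFormat: engine.manualRenderingFormat,
                                  frameCapacity: engine.manualRenderingMaximumFrameCount)!
    let status = try engine.renderOffline(engine.manualRenderingMaximumFrameCount, to: output)
    // On .success, `output` now holds the rendered frames; write them to an
    // AVAudioFile (or hand them to a device) to actually hear anything.
    print("render status: \(status.rawValue)")
} catch {
    print("offline render failed: \(error)")
}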
regards, Joël
Hi,
I'm currently developing an AVB hardware device, and I'm stuck because the Apple AVB stack is throwing errors without much information.
Is there any way to get more information about these assertions and why they are happening?
Furthermore, is there any documentation on the AppleAVBAudio module? It would be very handy.
Here are the logs shown in the console:
Filtering the log data using "process == "coreaudiod""
Timestamp Thread Type Activity PID TTL
2025-12-05 15:44:27.087043+0100 0x15ae74 Default 0x0 12965 0 coreaudiod: (AppleAVBAudio) Assert: <private> (value 0x0 0), <private> file: <private>, line: 1533
2025-12-05 15:44:27.087545+0100 0x15ae74 Default 0x0 12965 0 coreaudiod: (AppleAVBAudio) Assert: <private> (value 0x0 0), <private> file: <private>, line: 1533
2025-12-05 15:44:27.088043+0100 0x15ae74 Default 0x0 12965 0 coreaudiod: (AppleAVBAudio) Assert: <private> (value 0x0 0), <private> file: <private>, line: 1533
2025-12-05 15:44:27.088546+0100 0x15ae74 Default 0x0 12965 0 coreaudiod: (AppleAVBAudio) Assert: <private> (value 0x0 0), <private> file: <private>, line: 1533
2025-12-05 15:44:27.089043+0100 0x15ae74 Default 0x0 12965 0 coreaudiod: (AppleAVBAudio) Assert: <private> (value 0x0 0), <private> file: <private>, line: 1533
2025-12-05 15:44:27.089545+0100 0x15ae74 Default 0x0 12965 0 coreaudiod: (AppleAVBAudio) Assert: <private> (value 0x0 0), <private> file: <private>, line: 1533
2025-12-05 15:44:27.090043+0100 0x15ae74 Default 0x0 12965 0 coreaudiod: (AppleAVBAudio) Assert: <private> (value 0x0 0), <private> file: <private>, line: 1533
2025-12-05 15:44:27.090545+0100 0x15ae74 Default 0x0 12965 0 coreaudiod: (AppleAVBAudio) Assert: <private> (value 0x0 0), <private> file: <private>, line: 1533
2025-12-05 15:44:27.091043+0100 0x15ae74 Default 0x0 12965 0 coreaudiod: (AppleAVBAudio) Assert: <private> (value 0x0 0), <private> file: <private>, line: 1533
2025-12-05 15:44:27.091545+0100 0x15ae74 Default 0x0 12965 0 coreaudiod: (AppleAVBAudio) Assert: <private> (value 0x0 0), <private> file: <private>, line: 1533
2025-12-05 15:44:27.092044+0100 0x15ae74 Default 0x0 12965 0 coreaudiod: (AppleAVBAudio) Assert: <private> (value 0x0 0), <private> file: <private>, line: 1533
2025-12-05 15:44:27.092544+0100 0x15ae74 Default 0x0 12965 0 coreaudiod: (AppleAVBAudio) Assert: <private> (value 0x0 0), <private> file: <private>, line: 1533
2025-12-05 15:44:27.093044+0100 0x15ae74 Default 0x0 12965 0 coreaudiod: (AppleAVBAudio) Assert: <private> (value 0x0 0), <private> file: <private>, line: 1533
2025-12-05 15:44:27.093552+0100 0x15ae74 Default 0x0 12965 0 coreaudiod: (AppleAVBAudio) Assert: <private> (value 0x0 0), <private> file: <private>, line: 1533
2025-12-05 15:44:27.094050+0100 0x15ae74 Default 0x0 12965 0 coreaudiod: (AppleAVBAudio) Assert: <private> (value 0x0 0), <private> file: <private>, line: 1533
2025-12-05 15:44:27.094543+0100 0x15ae74 Default 0x0 12965 0 coreaudiod: (AppleAVBAudio) Assert: <private> (value 0x0 0), <private> file: <private>, line: 1533
I started playing with transcription of audio files on macOS today, latest beta of Xcode and latest beta of Tahoe. Transcription itself works really well, but for some reason the majority of the results contain no audioTimeRange. I got 22 single-word results with time ranges, spread out over a total file of 53 minutes.
Is there something I can do to improve this? To my understanding, I have followed sample code and instructions very closely, but the SwiftTranscriptionSampleApp and other examples I've seen lead me to believe I should be getting a lot more time ranges than I actually do.
I’m using the shared instance of AVAudioSession. After activating it with .setActive(true), I observe the outputVolume, and it correctly reports the device’s volume.
However, after deactivating the session using .setActive(false), changing the volume, and then reactivating it again, the outputVolume returns the previous volume (before deactivation), not the current device volume. The correct volume is only reported after the user manually changes it again using physical buttons or Control Center, which triggers the observer.
What I need is a way to retrieve the actual current device volume immediately after reactivating the audio session, even on the second and subsequent activations.
Disabling and re-enabling the audio session is essential to how my application functions.
I’ve tested this behavior with my colleagues, and the issue is consistently reproducible on iOS 18.0.1, iOS 18.1, iOS 18.3, iOS 18.5 and iOS 18.6.2. On devices running iOS 17.6.1 and iOS 16.0.3, outputVolume correctly reflects the current volume immediately after calling .setActive(true) multiple times.
Hi all,
I have been quite stumped on this behavior for a little bit now, so thought it best to share here and see if someone more experience with AVAudioEngine / AVAudioSession can weigh in.
Right now I have an AVAudioEngine that I am using for voice chat, giving it buffers to play. This works perfectly until route changes start to occur, which cause the AVAudioEngine to reset itself and stop all players attached to it.
Once an AVAudioPlayerNode gets stopped because of this (but also at any other time), all samples that were scheduled to be played get purged. Where this becomes confusing for me is that the completion handler gets called every time, regardless of whether the sound was actually played.
Is there a reliable way to know if a sample needs to be rescheduled after a player has been reset?
I am not quite sure in my case what my observer of AVAudioEngineConfigurationChange needs to be doing, as this engine only handles output. All input is through a separate engine for simplicity.
Currently I am storing a queue of samples as they get sent to the AVAudioPlayerNode for playback, and in the completion handler I check whether the player isPlaying or not. If it is playing, I assume the sound actually was played; if not, I leave it in the queue and assume that an observer of the route change or the configuration change will notice there are samples in the queue and reschedule them.
Thanks for any feedback!
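As a point of reference, here is a hedged sketch of one way to do this bookkeeping using the completion-callback-type variant of scheduleBuffer. Names like OutputQueue and pendingBuffers are made up, and whether .dataPlayedBack is a fully reliable "actually played" signal across engine resets would still need verifying.

import AVFoundation

final class OutputQueue {
    let engine = AVAudioEngine()
    let player = AVAudioPlayerNode()
    private var pendingBuffers: [AVAudioPCMBuffer] = []
    private var configurationObserver: NSObjectProtocol?

    init() {
        engine.attach(player)
        engine.connect(player, to: engine.mainMixerNode, format: nil)

        // Rebuild and reschedule whenever the engine resets itself.
        configurationObserver = NotificationCenter.default.addObserver(
            forName: .AVAudioEngineConfigurationChange,
            object: engine,
            queue: .main) { [weak self] _ in
            self?.restartAndReschedule()
        }
    }

    func enqueue(_ buffer: AVAudioPCMBuffer) {
        pendingBuffers.append(buffer)
        // .dataPlayedBack is requested so the handler reports whether the data
        // was rendered to output, rather than merely consumed or purged.
        player.scheduleBuffer(buffer, at: nil, options: [],
                              completionCallbackType: .dataPlayedBack) { [weak self] callbackType in
            guard callbackType == .dataPlayedBack else { return }
            DispatchQueue.main.async {
                self?.pendingBuffers.removeAll { $0 === buffer }
            }
        }
    }

    private func restartAndReschedule() {
        try? engine.start()
        player.play()
        let unplayed = pendingBuffers
        pendingBuffers.removeAll()
        unplayed.forEach { enqueue($0) }   // naive rescheduling of unplayed buffers
    }
}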
I'd like to find out: Can backgrounded apps record audio?
In the past as I recall, I found that backgrounded apps were pretty restricted and couldn't do much of anything.
However I'm not familiar with the current state of affairs.
With iOS 15.8 and above, can backgrounded apps record audio if they've been given permission by the user to access the microphone?
Thanks.
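For context, a minimal sketch of the setup that is typically involved, assuming the app declares the "audio" entry in UIBackgroundModes in Info.plist and already has microphone permission:

import AVFoundation

// Sketch only: with the "audio" background mode and mic permission, an active
// record-capable session generally keeps delivering input buffers after the
// app is backgrounded.
let session = AVAudioSession.sharedInstance()
try? session.setCategory(.playAndRecord, mode: .default, options: [])
try? session.setActive(true)

let engine = AVAudioEngine()
let inputFormat = engine.inputNode.outputFormat(forBus: 0)
engine.inputNode.installTap(onBus: 0, bufferSize: 4096, format: inputFormat) { buffer, _ in
    // Process or write the captured audio here.
}
try? engine.start()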
We need assistance resolving a critical audio design conflict in our Push-to-Talk (PTT) application. Our current volume amplification strategy, which applies a GAIN factor to PCM samples in conjunction with setting the AVAudioSession category to Playback, works when PTT is used on its own. However, once the same PTT call is reported through the CallKit framework, this amplification effect is lost. The CallKit integration appears to force a different, non-amplifying audio session category or configuration, negatively impacting the user's perceived call volume. We need guidance on how to maintain the AVAudioSessionCategoryPlayback setting, or an equivalent high-volume configuration, while operating under the control of CallKit.
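For reference, a minimal Swift sketch of the PCM gain approach described above (the clamping and the choice of gain value are illustrative assumptions):

import AVFoundation

// Scale the float samples of a PCM buffer in place; clamp to avoid clipping.
func applyGain(_ gain: Float, to buffer: AVAudioPCMBuffer) {
    guard let channels = buffer.floatChannelData else { return }
    let frameCount = Int(buffer.frameLength)
    for channel in 0..<Int(buffer.format.channelCount) {
        let samples = channels[channel]
        for frame in 0..<frameCount {
            samples[frame] = max(-1.0, min(1.0, samples[frame] * gain))
        }
    }
}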
I've been wondering if there is a way to modify or even disable tones for indicating channel states. The behaviour regarding tones seems like a black box with little documentation.
During migration to Apple's PT Framework we've noticed that there are a few scenarios where a tone is played that doesn't match certain certifications. For example, moving from one channel to another produces a tone that would fail a test case. I understand the reasoning fully, as it marks that the channel is ready to transmit or receive, but this doesn't mirror the behaviour of TETRA, which is what would be wanted in this case.
I'm also wondering if there would be any way to directly communicate feedback regarding PT Framework?
I have sent in a feedback report (FB18222398) but I have no idea if anyone has looked at it. I know from past experiences that Apple devs do look at these forums.
This applies to each of the betas: 1, 2, and 3. I have created a new Personal Voice with each beta. I create a personal voice in English. When it's done processing, I tap Preview and it says in English what is expected. But after some time, an hour or a day, the language of the voice file changes and it no longer works properly. If I press Preview it is no longer intelligible. I have a text-to-speech app, and initially the created voice works, but once the language of the file changes it no longer works. I have run an app on my iPhone through Xcode that prints to the console the voices installed on the device along with their language. Currently this is the voice file:
Voice Identifier: com.apple.speech.personalvoice.AAA9C6F2-9125-475F-BA2F-22C63274991D
Language: es-MX
and on a second device the same personal voice is in a different language:
Voice Identifier: com.apple.speech.personalvoice.AAA9C6F2-9125-475F-BA2F-22C63274991D
Language: zh-CN
However, a previous personal voice file that was listed as Spanish (Mexico) played English with a Spanish accent, and when playing Spanish text it sounded almost perfect. This current personal voice doesn't do that and is unintelligible. Previous attempts have converted to Chinese.
I hope someone can look into this.
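For anyone reproducing this, a Swift sketch of the kind of listing described above (requires Personal Voice authorization; filtering by the isPersonalVoice trait is an assumption about how Personal Voices are flagged):

import AVFoundation

// Print identifier and language of every installed Personal Voice.
AVSpeechSynthesizer.requestPersonalVoiceAuthorization { status in
    guard status == .authorized else { return }
    for voice in AVSpeechSynthesisVoice.speechVoices()
        where voice.voiceTraits.contains(.isPersonalVoice) {
        print("Voice Identifier: \(voice.identifier)")
        print("Language: \(voice.language)")
    }
}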
Hi,
our CoreAudio server plugin utilizes the SystemConfiguration.framework to store and restore specific shared, system-wide settings.
While our application can authenticate to gain write access to the shared configuration settings via the SystemConfiguration.framework, the CoreAudio server plugin obviously can't have any user interaction and therefore does not authenticate.
Is it possible to authenticate the CoreAudio server plugin to gain write permissions? Are there any entitlements or other means that would allow this?
Thanks!
On macOS Sequoia, I'm having the hardest time getting this basic audio output to work correctly. I'm compiling in XCode using C99, and when I run this, I get audio for a split second, and then nothing, indefinitely.
Any ideas what could be going wrong?
Here's a minimum code example to demonstrate:
#include <AudioToolbox/AudioToolbox.h>
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>
#include <unistd.h>
#define RENDER_BUFFER_COUNT 2
#define RENDER_FRAMES_PER_BUFFER 128
// mono linear PCM audio data at 48kHz
#define RENDER_SAMPLE_RATE 48000
#define RENDER_CHANNEL_COUNT 1
#define RENDER_BUFFER_BYTE_COUNT (RENDER_FRAMES_PER_BUFFER * RENDER_CHANNEL_COUNT * sizeof(float))
void RenderAudioSaw(float* outBuffer, uint32_t frameCount, uint32_t channelCount)
{
    static bool isInverted = false;
    float scalar = isInverted ? -1.f : 1.f;
    for (uint32_t frame = 0; frame < frameCount; ++frame)
    {
        for (uint32_t channel = 0; channel < channelCount; ++channel)
        {
            // series of ramps, alternating up and down.
            outBuffer[frame * channelCount + channel] = 0.1f * scalar * ((float)frame / frameCount);
        }
    }
    isInverted = !isInverted;
}
AudioStreamBasicDescription coreAudioDesc = { 0 };
AudioQueueRef coreAudioQueue = NULL;
AudioQueueBufferRef coreAudioBuffers[RENDER_BUFFER_COUNT] = { NULL };
void coreAudioCallback(void* unused, AudioQueueRef queue, AudioQueueBufferRef buffer)
{
    // 0's here indicate no fancy packet magic
    AudioQueueEnqueueBuffer(queue, buffer, 0, 0);
}
int main(void)
{
    const UInt32 BytesPerSample = sizeof(float);
    coreAudioDesc.mSampleRate = RENDER_SAMPLE_RATE;
    coreAudioDesc.mFormatID = kAudioFormatLinearPCM;
    coreAudioDesc.mFormatFlags = kLinearPCMFormatFlagIsFloat | kLinearPCMFormatFlagIsPacked;
    coreAudioDesc.mBytesPerPacket = RENDER_CHANNEL_COUNT * BytesPerSample;
    coreAudioDesc.mFramesPerPacket = 1;
    coreAudioDesc.mBytesPerFrame = RENDER_CHANNEL_COUNT * BytesPerSample;
    coreAudioDesc.mChannelsPerFrame = RENDER_CHANNEL_COUNT;
    coreAudioDesc.mBitsPerChannel = BytesPerSample * 8;

    coreAudioQueue = NULL;
    OSStatus result;

    // most of the 0 and NULL params here are for compressed sound formats etc.
    result = AudioQueueNewOutput(&coreAudioDesc, &coreAudioCallback, NULL, 0, 0, 0, &coreAudioQueue);
    if (result != noErr)
    {
        assert(false && "AudioQueueNewOutput failed!");
        abort();
    }

    for (int i = 0; i < RENDER_BUFFER_COUNT; ++i)
    {
        uint32_t bufferSize = coreAudioDesc.mBytesPerFrame * RENDER_FRAMES_PER_BUFFER;
        result = AudioQueueAllocateBuffer(coreAudioQueue, bufferSize, &(coreAudioBuffers[i]));
        if (result != noErr)
        {
            assert(false && "AudioQueueAllocateBuffer failed!");
            abort();
        }
    }

    for (int i = 0; i < RENDER_BUFFER_COUNT; ++i)
    {
        RenderAudioSaw(coreAudioBuffers[i]->mAudioData, RENDER_FRAMES_PER_BUFFER, RENDER_CHANNEL_COUNT);
        coreAudioBuffers[i]->mAudioDataByteSize = coreAudioBuffers[i]->mAudioDataBytesCapacity;
        AudioQueueEnqueueBuffer(coreAudioQueue, coreAudioBuffers[i], 0, 0);
    }

    AudioQueueStart(coreAudioQueue, NULL);

    sleep(10); // some time to hear the audio

    AudioQueueStop(coreAudioQueue, true);
    AudioQueueDispose(coreAudioQueue, true);
    return 0;
}
Hi, when using ApplicationMusicPlayer from MusicKit my app automatically gets the media controls on the lock screen: Play/ Pause, Skip Buttons, Playback Position etc.
I would like to customize these. Tried a bunch of things, e.g. using MPRemoteCommandCenter. So far I haven't had any success.
Does anyone know how I can customize the media controls of ApplicationMusicPlayer?
Thank you.
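For reference, the kind of MPRemoteCommandCenter setup mentioned above looks roughly like this; whether ApplicationMusicPlayer lets these handlers take precedence over its own lock screen controls is exactly the open question:

import MediaPlayer

let commandCenter = MPRemoteCommandCenter.shared()

// Attempt to take over the play command; keep the token to remove the handler later.
let playTarget = commandCenter.playCommand.addTarget { _ in
    // custom play handling would go here
    return .success
}

// Attempt to disable the skip-forward control.
commandCenter.nextTrackCommand.isEnabled = false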
Hello Apple Developer Community,
I am seeking clarification on the intended display behavior of HLS audio tracks within the iOS 26 (or current beta) native player, specifically concerning the NAME and LANGUAGE attributes of the EXT-X-MEDIA tag.
In our HLS manifests, we define alternative audio tracks using EXT-X-MEDIA tags, like so:
#EXT-X-MEDIA:TYPE=AUDIO,GROUP-ID="audio",LANGUAGE="ja",NAME="AUDIO-1",DEFAULT=YES,AUTOSELECT=YES,URI="audio_ja.m3u8"
#EXT-X-MEDIA:TYPE=AUDIO,GROUP-ID="audio",LANGUAGE="ja",NAME="AUDIO-2",URI="audio_en.m3u8"
Our observation is that when an audio track is selected and its name is displayed in the native iOS media controls (e.g., Control Center or within a full-screen video player's UI), the value specified in the NAME attribute ("AUDIO-1", "AUDIO-2") does not seem to be used. Instead, the display appears to derive from the LANGUAGE attribute ("ja", "en"), often showing the system's localized string for that language (e.g., "Japanese", "English").
We would like to understand the official or intended behavior regarding this.
Is it the expected behavior for the iOS native player to prioritize the LANGUAGE attribute (or its localized equivalent) over the NAME attribute for displaying the selected audio track's label?
If this is the intended design, what is the recommended best practice for developers who wish to present a custom, human-readable name for audio tracks (beyond the standard language name) in the native iOS UI?
Are there any specific AVPlayer properties or AVMediaSelectionOption considerations that would allow more granular control over this display, or is this entirely managed by the system based on the LANGUAGE attribute?
Any insights or official guidance on this behavior in iOS 26 (and potentially previous versions) would be greatly appreciated.
Thank you for your time and assistance.
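One way to check what AVFoundation itself exposes for each rendition (a sketch; the manifest URL is a placeholder) is to load the audible media selection group and print each option's displayName and language tag:

import AVFoundation

let asset = AVURLAsset(url: URL(string: "https://example.com/master.m3u8")!)

Task {
    if let group = try? await asset.loadMediaSelectionGroup(for: .audible) {
        for option in group.options {
            // displayName should reflect the NAME attribute, if it is surfaced at all.
            print("displayName:", option.displayName,
                  "language:", option.extendedLanguageTag ?? "nil")
        }
    }
}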
When using the [AVAudioSession setCategory:withOptions:error:] API, the call hangs for a long time and eventually returns an error. This issue occurs on iOS 16 and did not appear in earlier versions.
Thread 135:
0 libsystem_kernel.dylib 0x00000002478e3cd4 _mach_msg2_trap :8 (in libsystem_kernel.dylib)
1 libsystem_kernel.dylib 0x00000002478e7214 _mach_msg_overwrite :428 (in libsystem_kernel.dylib)
2 libsystem_kernel.dylib 0x00000002478e705c _mach_msg :24 (in libsystem_kernel.dylib)
3 libdispatch.dylib 0x00000001d63ffe84 __dispatch_mach_send_and_wait_for_reply :548 (in libdispatch.dylib)
4 libdispatch.dylib 0x00000001d6400224 _dispatch_mach_send_with_result_and_wait_for_reply :60 (in libdispatch.dylib)
5 libxpc.dylib 0x00000001b2114e04 _xpc_connection_send_message_with_reply_sync :256 (in libxpc.dylib)
6 Foundation 0x000000019b6249f0 ___NSXPCCONNECTION_IS_WAITING_FOR_A_SYNCHRONOUS_REPLY__ :16 (in Foundation)
7 Foundation 0x000000019c06d1b4 -[NSXPCConnection _sendInvocation:orArguments:count:methodSignature:selector:withProxy:] :2100 (in Foundation)
8 CoreFoundation 0x000000019dfcb1cc ____forwarding___ :1072 (in CoreFoundation)
9 CoreFoundation 0x000000019dfd3200 ___forwarding_prep_0___ :96 (in CoreFoundation)
10 AudioSession 0x00000001c77498b0 __ZN4avas6client11SessionCore10HandlePingEv :192 (in AudioSession)
11 AudioSession 0x00000001c77497b0 ____ZN4avas6client11SessionCore12DispatchPingEv_block_invoke :52 (in AudioSession)
12 libdispatch.dylib 0x00000001d63e4adc __dispatch_call_block_and_release :32 (in libdispatch.dylib)
13 libdispatch.dylib 0x00000001d63fe7ec __dispatch_client_callout :16 (in libdispatch.dylib)
14 libdispatch.dylib 0x00000001d63ed468 __dispatch_lane_serial_drain :740 (in libdispatch.dylib)
15 libdispatch.dylib 0x00000001d63edf78 __dispatch_lane_invoke :440 (in libdispatch.dylib)
16 libdispatch.dylib 0x00000001d63f6f48 __dispatch_root_queue_drain :364 (in libdispatch.dylib)
17 libdispatch.dylib 0x00000001d63f6d08 __dispatch_worker_thread :268 (in libdispatch.dylib)
18 libsystem_pthread.dylib 0x00000001f9ff144c __pthread_start :136 (in libsystem_pthread.dylib)
19 libsystem_pthread.dylib 0x00000001f9fed8cc _thread_start :8 (in libsystem_pthread.dylib)
Thread 132:
0 libsystem_kernel.dylib 0x00000002478e3cd4 _mach_msg2_trap :8 (in libsystem_kernel.dylib)
1 libsystem_kernel.dylib 0x00000002478e7214 _mach_msg_overwrite :428 (in libsystem_kernel.dylib)
2 libsystem_kernel.dylib 0x00000002478e705c _mach_msg :24 (in libsystem_kernel.dylib)
3 libdispatch.dylib 0x00000001d63ffe84 __dispatch_mach_send_and_wait_for_reply :548 (in libdispatch.dylib)
4 libdispatch.dylib 0x00000001d6400224 _dispatch_mach_send_with_result_and_wait_for_reply :60 (in libdispatch.dylib)
5 libxpc.dylib 0x00000001b2114e04 _xpc_connection_send_message_with_reply_sync :256 (in libxpc.dylib)
6 Foundation 0x000000019b6249f0 ___NSXPCCONNECTION_IS_WAITING_FOR_A_SYNCHRONOUS_REPLY__ :16 (in Foundation)
7 Foundation 0x000000019c06d1b4 -[NSXPCConnection _sendInvocation:orArguments:count:methodSignature:selector:withProxy:] :2100 (in Foundation)
8 CoreFoundation 0x000000019dfcb1cc ____forwarding___ :1072 (in CoreFoundation)
9 CoreFoundation 0x000000019dfd3200 ___forwarding_prep_0___ :96 (in CoreFoundation)
10 AudioSession 0x00000001c7754198 __ZNK4avas6client11SessionCore18SetBatchPropertiesEP12NSDictionaryIP8NSStringPU25objcproto14NSSecureCoding11objc_objectEPU15__autoreleasingP7NSArrayIPS2_IS4_P8NSNumberEENS_30AVAudioSessionBatchSetStrategyEbb :548 (in AudioSession)
11 AudioSession 0x00000001c7753e58 __ZNK4avas6client11SessionCore20SetBatchPropertiesMXEP12NSDictionaryIP8NSStringPU25objcproto14NSSecureCoding11objc_objectE :92 (in AudioSession)
12 AudioSession 0x00000001c775179c __ZN4avas6client11SessionCore11setCategoryEP8NSStringS3_32AVAudioSessionRouteSharingPolicym :472 (in AudioSession)
13 AudioSession 0x00000001c7768f88 -[AVAudioSession setCategory:withOptions:error:] :68 (in AudioSession)
14 AlipayWallet 0x000000010140580c -[AVAudioSession(APMHook) apmhook_setCategory:withOptions:error:] APMHookAudioSession.m:35 (in AlipayWallet)
15 AlipayWallet 0x00000001014001a4 -[APMAudioSessionManager resume] APMAudioSessionManager.m:718 (in AlipayWallet)
16 libdispatch.dylib 0x00000001d63e4adc __dispatch_call_block_and_release :32 (in libdispatch.dylib)
17 libdispatch.dylib 0x00000001d63fe7ec __dispatch_client_callout :16 (in libdispatch.dylib)
18 libdispatch.dylib 0x00000001d63ed468 __dispatch_lane_serial_drain :740 (in libdispatch.dylib)
19 libdispatch.dylib 0x00000001d63edf44 __dispatch_lane_invoke :388 (in libdispatch.dylib)
20 libdispatch.dylib 0x00000001d63f83ec __dispatch_root_queue_drain_deferred_wlh :292 (in libdispatch.dylib)
21 libdispatch.dylib 0x00000001d63f7ce4 __dispatch_workloop_worker_thread :692 (in libdispatch.dylib)
22 libsystem_pthread.dylib 0x00000001f9fee3b8 __pthread_wqthread :292 (in libsystem_pthread.dylib)
23 libsystem_pthread.dylib 0x00000001f9fed8c0 _start_wqthread :8 (in libsystem_pthread.dylib)
Hi everyone,
I’m working on an iOS MusicKit app that overlays a metronome on top of Apple Music playback. To line the clicks up perfectly I’d like access to low-level audio analysis data—ideally a waveform / spectrogram or beat grid—while the track is playing.
I’ve noticed that several approved DJ apps (e.g. djay, Serato, rekordbox) can already:
• Display detailed scrolling waveforms of Apple Music songs
• Scratch, loop or time-stretch those tracks in real time
That implies they receive decoded PCM frames or at least high-resolution analysis data from Apple Music under a special entitlement.
My questions:
1. Does MusicKit (or any public framework) expose real-time audio buffers, FFT bins, or beat markers for streaming Apple Music content?
2. If not, is there an Apple program or entitlement that developers can apply for—similar to the “DJ with Apple Music” initiative—to gain that deeper access?
3. Where can I find official documentation or a point of contact for this kind of request?
I’ve searched the docs and forums but only see standard MusicKit playback APIs, which don’t appear to expose raw audio for DRM-protected songs. Any guidance, links or insider tips on the proper application process would be hugely appreciated!
Thanks in advance.
I'm developing a visionOS app. I want to know how to play spatial audio by means other than RealityKit. Similarly, on iOS or macOS, how can I play spatial audio outside of RealityKit?
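Outside RealityKit, one common route on iOS, macOS and visionOS is AVAudioEngine with an AVAudioEnvironmentNode; a minimal sketch (positions and the sample rate are placeholder assumptions):

import AVFoundation

// Position a mono source in 3D space and spatialize it with an environment node.
let engine = AVAudioEngine()
let environment = AVAudioEnvironmentNode()
let player = AVAudioPlayerNode()

engine.attach(environment)
engine.attach(player)

// Spatialized sources need a mono format.
let mono = AVAudioFormat(standardFormatWithSampleRate: 48_000, channels: 1)!
engine.connect(player, to: environment, format: mono)
engine.connect(environment, to: engine.mainMixerNode, format: nil)

player.renderingAlgorithm = .auto                       // let the system pick (HRTF, etc.)
player.position = AVAudio3DPoint(x: 2, y: 0, z: -1)     // source position
environment.listenerPosition = AVAudio3DPoint(x: 0, y: 0, z: 0)

try? engine.start()
// Schedule a mono buffer or file on `player`, then call player.play().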