I'm trying to use the new Speech framework for streaming transcription on macOS 26.3, and I can reproduce a failure with SpeechAnalyzer.start(inputSequence:).
What is working:
SpeechAnalyzer + SpeechTranscriber
offline path using start(inputAudioFile:finishAfterFile:)
same Spanish WAV file transcribes successfully and returns a coherent final result
What is not working:
SpeechAnalyzer + SpeechTranscriber
stream path using start(inputSequence:)
same WAV, replayed as AnalyzerInput(buffer:bufferStartTime:)
fails once replay starts with:
_GenericObjCError domain=Foundation._GenericObjCError code=0 detail=nilError
I also tried:
DictationTranscriber instead of SpeechTranscriber
no realtime pacing during replay
Both still fail in stream mode with the same error.
So this does not currently look like a ScreenCaptureKit issue or a Python integration issue. I reduced it to a pure Swift CLI repro.
Environment:
macOS 26.3 (25D122)
Xcode 26.3
Swift 6.2.4
Apple Silicon Mac
Has anyone here gotten SpeechAnalyzer.start(inputSequence:) working reliably on macOS 26.x?
If so, I'd be interested in any workaround or any detail that differs from the obvious setup (a rough sketch follows this list):
prepareToAnalyze(in:)
bestAvailableAudioFormat(...)
AnalyzerInput(buffer:bufferStartTime:)
replaying a known-good WAV in chunks
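For reference, the stream-mode setup is essentially the following. This is a rough outline rather than my exact code: the transcriber options are from memory, the chunk size is arbitrary, and the conversion to bestAvailableAudioFormat(...) is omitted.

import Speech
import AVFoundation

// Rough outline of the stream path (illustrative, not the exact failing code).
func streamTranscribe(fileURL: URL, locale: Locale) async throws {
    let transcriber = SpeechTranscriber(locale: locale,
                                        transcriptionOptions: [],
                                        reportingOptions: [],
                                        attributeOptions: [])
    let analyzer = SpeechAnalyzer(modules: [transcriber])

    // Feed the analyzer from an AsyncStream of AnalyzerInput values.
    let (inputSequence, inputBuilder) = AsyncStream<AnalyzerInput>.makeStream()
    try await analyzer.start(inputSequence: inputSequence)

    // Replay the known-good WAV in chunks.
    let file = try AVAudioFile(forReading: fileURL)
    let chunkFrames: AVAudioFrameCount = 4096
    while file.framePosition < file.length {
        guard let buffer = AVAudioPCMBuffer(pcmFormat: file.processingFormat,
                                            frameCapacity: chunkFrames) else { break }
        try file.read(into: buffer, frameCount: chunkFrames)
        inputBuilder.yield(AnalyzerInput(buffer: buffer))
    }
    inputBuilder.finish()

    // Collect results; in stream mode this is where the nilError surfaces.
    for try await result in transcriber.results {
        print(result.text)
    }
}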
I already filed Feedback Assistant:
FB22149971
I have exhausted standard debugging approaches and need guidance on Final Cut Pro's AU plugin loading behavior.
PLUGIN OVERVIEW
My plugin is an AUv2 audio plugin built with JUCE 8.
The plugin loads and functions correctly in:
Pro Tools (AAX)
Media Composer (AAX)
Reaper (AU + VST3)
Logic Pro (AU)
GarageBand (AU)
Audacity (AU)
DaVinci Resolve / Fairlight (AU)
Harrison Mixbus (AU)
Ableton Live (AU)
Cubase (VST3)
Nuendo (VST3)
It does NOT load in Final Cut Pro (tested on 10.8.x, macOS 14.6 and 15.2).
DIAGNOSTICS COMPLETED
auval passes cleanly:
auval -v aufx Hwhy Manu → AU VALIDATION SUCCEEDED
Plugin is notarized and stapled:
xcrun stapler validate → The validate action worked
spctl --assess --type exec → Note: returns 'not an app' which we understand is expected for .component bundles.
Hardened Runtime is enabled on the bundle.
We identified that our Info.plist contained a 'resourceUsage' dictionary in the AudioComponents entry. We found via system logs that this was setting au_componentFlags = 2 (kAudioComponentFlag_SandboxSafe), causing FCP to attempt loading the plugin in-process inside its sandbox, where our UDP networking is denied. We removed the resourceUsage dict, confirmed au_componentFlags = 0 in the CAReportingClient log, and FCP now loads the plugin out-of-process via AUHostingServiceXPC_arrow.
Despite au_componentFlags = 0 and out-of-process loading confirmed, the plugin still does not appear in FCP's effects browser.
We also identified and fixed a channel layout issue — our isBusesLayoutSupported was not returning true for 5-channel layouts (which FCP uses internally). This is now fixed.
AU cache has been fully cleared:
~/Library/Caches/com.apple.audio.AudioComponentRegistrar
/Library/Caches/com.apple.audio.AudioComponentRegistrar
coreaudiod and AudioComponentRegistrar killed to force rescan
auval -a run to force re-registration
I'm using MusicKit for DRM track playback in my iOS app, and a third-party library to play local user-owned music from the file system and the music library.
The app also supports accessory devices that offer Bluetooth remote media control.
The goal is to achieve parity between how the remote interacts with user-owned music and with the DRM / cloud / Apple Music tracks in my application's music player.
Track navigation, app volume (rather than system volume), and scrubbing need to work consistently across a mix of tracks whose DRM and cloud status could alternate within one album or playlist.
Apple Music queue and track pickers are not useful tools in my app.
How can I support playing DRM and Apple Music tracks while not surrendering the remote control features to the system?
[Note: this issue was happening on a main testing device, and after testing the same code on other devices, this issue is only happening on 1 out of 4 devices]
We are successfully getting a MusicCatalogResourceResponse for every song ID where we make the MusicCatalogResourceRequest. We are able to display the song title, artist name, and album artwork for each Song in the response.
However - when we go to play the song, there are some songs that play, and several songs that do not play. For the songs that don't play, the console shows “Failed to prepareToPlay error=<MPMusicPlayerControllerErrorDomain.6 "Failed to prepare to play" {}>”
let musicPlayer = ApplicationMusicPlayer.shared

func playSong(_ song: Song) {
    musicPlayer.queue = [song]
    Task {
        try await musicPlayer.prepareToPlay()
        try await musicPlayer.play()
    }
}
Is there anything else we can investigate about what may be causing specific song IDs not to play on this specific device?
Even if we remove the musicPlayer.prepareToPlay() call, we still see the same console error when running playSong with the songs that don't work.
It is always the same song IDs that we can play and always the same song IDs that we cannot get to play, even trying them across different projects with different bundle identifiers.
We can tap to play a song that works, and it starts playing immediately. Then tap a song that doesn't work, and nothing happens. Then back to a song that works. It's consistent which songs succeed and fail on this device.
Perhaps there is an issue specific to this very iPad when it comes to certain specific songs, but we'd like to be confident that an app relying on MusicKit will be able to play songs that have been successfully loaded with a MusicCatalogResourceResponse.
Thanks for any help or suggestions about what we may be able to investigate further on the device or what we should consider when launching an app that expects anyone with Apple Music to be able to listen to any of the songs loaded by the app.
Specific iPad details: iPad Pro (12.9-inch) (6th generation) running iPadOS 26.4 Beta
Two of the song IDs that won't play on this iPad (even though we can access and display their album artwork and all other information): 943204000 and 1441164805
Hello,
I am a Bluetooth engineer at Google investigating an interoperability bug between an Android device and AirPods 4. When the Android device requests an L2CAP connection (with PSM = AVDTP) to the AirPods during SDP service discovery, the AirPods' L2CAP layer incorrectly responds with a "refused - no resources available" status, followed by a Pending status and a Success status. This violates the specification, which says that the request has been fully rejected after the refused status and should not receive follow-up responses. I suspect the "no resources available" response is a bug. This prevents A2DP from working with the AirPods.
This bug does not exist with AirPods 2 firmware.
Here is a packet capture:
1602 1969-12-31 16:07:04.805261 0.062473 localhost () Apple_6b:db:09 (AirPods) L2CAP 17 Sent Connection Request (AVDTP, SCID: 0x22c6)
1603 1969-12-31 16:07:04.810953 0.005692 controller host HCI_EVT 8 Rcvd Number of Completed Packets
1604 1969-12-31 16:07:04.811078 0.000125 Apple_6b:db:09 (AirPods) localhost () SDP 27 Rcvd Service Search Attribute Request : Device Information: [Bluetooth Profile Descriptor List 0x0009]
1605 1969-12-31 16:07:04.821249 0.010171 localhost () Apple_6b:db:09 (AirPods) SDP 19 Sent Service Search Attribute Response
1606 1969-12-31 16:07:04.876396 0.055147 controller host HCI_EVT 8 Rcvd Number of Completed Packets
1607 1969-12-31 16:07:04.876464 0.000068 Apple_6b:db:09 (AirPods) localhost () L2CAP 21 Rcvd Connection Response - Refused - no resources available (SCID: 0x22c6)
1608 1969-12-31 16:07:04.942539 0.066075 Apple_6b:db:09 (AirPods) localhost () SDP 41 Rcvd Service Search Attribute Request : Unknown: [Bluetooth Profile Descriptor List 0x0009]
1609 1969-12-31 16:07:04.951052 0.008513 localhost () Apple_6b:db:09 (AirPods) SDP 19 Sent Service Search Attribute Response
1610 1969-12-31 16:07:05.010605 0.059553 controller host HCI_EVT 8 Rcvd Number of Completed Packets
1611 1969-12-31 16:07:05.080593 0.069988 Apple_6b:db:09 (AirPods) localhost () SDP 27 Rcvd Service Search Attribute Request : GATT: [Bluetooth Profile Descriptor List 0x0009]
1612 1969-12-31 16:07:05.087636 0.007043 localhost () Apple_6b:db:09 (AirPods) SDP 19 Sent Service Search Attribute Response
1613 1969-12-31 16:07:05.209417 0.121781 controller host HCI_EVT 8 Rcvd Number of Completed Packets
1614 1969-12-31 16:07:05.279491 0.070074 Apple_6b:db:09 (AirPods) localhost () L2CAP 21 Rcvd Connection Response - Pending (SCID: 0x22c6)
1615 1969-12-31 16:07:05.280731 0.001240 Apple_6b:db:09 (AirPods) localhost () L2CAP 21 Rcvd Connection Response - Success (SCID: 0x22c6, DCID: 0x0406)
Please file this bug with the AirPods Bluetooth team.
I've got a streaming radio app that loads an HLS stream into an AVAudioPlayer. I've set up an Intents extension that notifies SiriKit that my app must handle INPlayMediaIntent in-app, and I'm able to successfully initiate the stream playing from my phone using the string "Play ".
My intent handler in app looks like this:
completionHandler(INPlayMediaIntentResponse(code: .success, userActivity: nil))
DispatchQueue.main.async {
    AudioPlayerService.shared.play()
}
The Audio Player service, in its init, does the following:
try AVAudioSession.sharedInstance().setCategory(
    .playback,
    mode: .default,
    policy: .longFormAudio
)
Additionally, in my Info.plist, I have the AirPlay optimization policy set to Long Form Audio.
Having said all that, when I try to route my app to play "on a given HomePod speaker" ("play on "), the speaker routing instructions are never followed. I haven't been able to find where I could instruct my app to follow the correct path here. I was assuming I couldn't trigger this behavior manually, as I believe I don't really have any control over AirPlay routing.
Is there any guidance for working with SiriKit to do the right thing with regards to audio routing?
Hello,
I am building an iOS-only, commercial app that uses AVSpeechSynthesizer with system voices, strictly using the APIs provided by Apple. Before distributing the app, I want to ensure that my current implementation does not conflict with the iOS Software License Agreement (SLA) and is aligned with Apple’s intended usage.
For a better playback experience (more accurate estimation of utterance duration and smoother skip forward/backward during playback), I currently synthesize speech as follows (a simplified sketch appears after this list):
AVSpeechSynthesizer.write(_:toBufferCallback:)
Converting the received AVAudioPCMBuffer buffers into audio data
Storing the audio inside the app sandbox
Playing it back using AVAudioPlayer / AVAudioEngine
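Concretely, the caching step looks roughly like this (the file name, format handling, and error handling are illustrative, not my exact code):

import AVFoundation

let synthesizer = AVSpeechSynthesizer()
let utterance = AVSpeechUtterance(string: "Hello, world.")
let outputURL = FileManager.default.temporaryDirectory.appendingPathComponent("utterance.caf")
var outputFile: AVAudioFile?

synthesizer.write(utterance) { buffer in
    guard let pcmBuffer = buffer as? AVAudioPCMBuffer, pcmBuffer.frameLength > 0 else {
        return // a zero-length buffer signals the end of the utterance
    }
    do {
        // Create the file lazily from the first buffer's format, then append.
        if outputFile == nil {
            outputFile = try AVAudioFile(forWriting: outputURL, settings: pcmBuffer.format.settings)
        }
        try outputFile?.write(from: pcmBuffer)
    } catch {
        print("Failed to write synthesized audio: \(error)")
    }
}
// Later, the cached file is played back with AVAudioPlayer or scheduled on an AVAudioEngine.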
The cached audio is:
Generated fully on-device using system voices
Stored only inside the app’s private container
Used only for internal playback controls (timeline, seek, skip ±5 seconds)
Never shared, exported, uploaded, or exposed outside the app
The alternative approaches would be:
Keeping the generated audio entirely in memory (RAM) for playback purposes, without writing it to the file system at any point
Or using AVSpeechSynthesizer.speak(_:) and playing speech strictly in real time which has a poorer user experience compared to my approach
I have reviewed the current iOS Software License Agreement:
https://www.apple.com/legal/sla/docs/iOS18_iPadOS18.pdf
In particular, section (f) mentions restrictions around System Characters, Live Captions, and Personal Voice, including the following excerpt:
“…use … only for your personal, non-commercial use…
No other creation or use of the System Characters, Live Captions, or Personal Voice is permitted by this License, including but not limited to the use, reproduction, display, performance, recording, publishing or redistribution in a … commercial context.”
I do not see a specific reference in the SLA to system text-to-speech voices used via AVSpeechSynthesizer, and I want to be certain that temporarily caching synthesized speech for internal, non-exported playback is acceptable in a commercial app.
My question is:
Is caching AVSpeechSynthesizer system-voice output inside the app sandbox for internal playback acceptable, or is Apple’s recommended approach to rely only on real-time playback (speak(_:)) or strictly in-memory buffering without file storage?
If this question falls outside DTS technical scope and is instead a policy or licensing matter, I would appreciate guidance on the authoritative Apple documentation or the correct Apple team/contact.
Thank you.
I’m using the shared instance of AVAudioSession. After activating it with .setActive(true), I observe the outputVolume, and it correctly reports the device’s volume.
However, after deactivating the session using .setActive(false), changing the volume, and then reactivating it again, the outputVolume returns the previous volume (before deactivation), not the current device volume. The correct volume is only reported after the user manually changes it again using physical buttons or Control Center, which triggers the observer.
What I need is a way to retrieve the actual current device volume immediately after reactivating the audio session, even on the second and subsequent activations.
Disabling and re-enabling the audio session is essential to how my application functions.
I’ve tested this behavior with my colleagues, and the issue is consistently reproducible on iOS 18.0.1, iOS 18.1, iOS 18.3, iOS 18.5 and iOS 18.6.2. On devices running iOS 17.6.1 and iOS 16.0.3, outputVolume correctly reflects the current volume immediately after calling .setActive(true) multiple times.
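For reference, the pattern is essentially this (simplified; the class wrapper and method names are illustrative):

import AVFoundation

final class VolumeObserver {
    private let session = AVAudioSession.sharedInstance()
    private var observation: NSKeyValueObservation?

    func activateAndRead() throws {
        try session.setActive(true)
        // On iOS 18.x this still reports the pre-deactivation value
        // until the user changes the hardware volume again.
        print("outputVolume after setActive(true): \(session.outputVolume)")

        observation = session.observe(\.outputVolume, options: [.new]) { _, change in
            print("outputVolume changed: \(change.newValue ?? 0)")
        }
    }

    func deactivate() throws {
        observation = nil
        try session.setActive(false)
    }
}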
I need to apply a headphone-specific scenario only when headphones are the sole active playback device in my iOS audio app.
The problem is that there is no definitive way to determine that headphones are the sole active playback device.
The portTypes of AVAudioSession.currentRoute.outputs don't guarantee headphones:
let session = AVAudioSession.sharedInstance()
let outputs = session.currentRoute.outputs
let headphonesOnly = outputs.count == 1 &&
    (outputs.first?.portType == .headphones ||
     outputs.first?.portType == .bluetoothA2DP ||
     outputs.first?.portType == .bluetoothHFP ||
     outputs.first?.portType == .bluetoothLE)
The issue with the code above is that the listed Bluetooth profiles (A2DP, HFP, LE) can be used by any audio device, not only headphones.
Is there any public API on iOS that can:
Distinguish Bluetooth headphones vs Bluetooth speakers when both use A2DP/LE?
Expose the user’s “Device Type” classification (headphones / speaker / car stereo, etc.) that is shown in Settings → Bluetooth → Device Type?
Provide a more reliable way to know “this route is definitely headphones” for A2DP devices, beyond portType and portName string heuristics?
I'm seeing crashes in _MPRemoteCommandEventDispatch on iOS 26.x devices in 3 apps. According to Bugsnag logs they are:
NSInternalInconsistencyException: event dispatch <_MPRemoteCommandEventDispatch: <MPRemoteCommandEvent: 0x11c049500 commandID=THV0 command=<MPRemoteCommand: 0x109ad1ea0 type=Play (0) enabled=YES handlers=[0x109b6a310]> sourceID=(null) ([HostedRoutingSessionDataSource] handleControlSendingCommand<2W5E>)> state:201> deallocated without calling continuation
I attached a log from Xcode organizer matching Bugsnag crash.
mpr_remote_command_event.crash
When I set a breakpoint on -[_MPRemoteCommandEventDispatch dealloc], I can see it's hit every time I tap play or pause on the Lock Screen play button.
Thread 0 Crashed:
0 libsystem_kernel.dylib 0x00000002370420cc __pthread_kill + 8 (:-1)
1 libsystem_pthread.dylib 0x00000001e975c810 pthread_kill + 268 (pthread.c:1721)
2 libsystem_c.dylib 0x0000000198f8ff64 abort + 124 (abort.c:122)
3 libc++abi.dylib 0x000000018a7cf808 __abort_message + 132 (abort_message.cpp:66)
4 libc++abi.dylib 0x000000018a7be484 demangling_terminate_handler() + 304 (cxa_default_handlers.cpp:76)
5 libobjc.A.dylib 0x000000018a6cff78 _objc_terminate() + 156 (objc-exception.mm:496)
6 xxxxxxxxxxxxxx 0x00000001003a7db8 CPPExceptionTerminate() + 416 (BSG_KSCrashSentry_CPPException.mm:156)
7 libc++abi.dylib 0x000000018a7cebdc std::__terminate(void (*)()) + 16 (cxa_handlers.cpp:59)
8 libc++abi.dylib 0x000000018a7ceb80 std::terminate() + 108 (cxa_handlers.cpp:88)
9 CoreFoundation 0x000000018d7341c4 __CFRunLoopPerCalloutARPEnd + 256 (CFRunLoop.c:769)
10 CoreFoundation 0x000000018d70bb5c __CFRunLoopRun + 1976 (CFRunLoop.c:3179)
11 CoreFoundation 0x000000018d70aa6c _CFRunLoopRunSpecificWithOptions + 532 (CFRunLoop.c:3462)
12 GraphicsServices 0x000000022e31c498 GSEventRunModal + 120 (GSEvent.c:2049)
13 UIKitCore 0x00000001930ceba4 -[UIApplication _run] + 792 (UIApplication.m:3902)
14 UIKitCore 0x0000000193077a78 UIApplicationMain + 336 (UIApplication.m:5577)
15 xxxxxxxxxxxxxx 0x00000001000c0134 main + 308 (main.swift:15)
16 dyld 0x000000018a722e28 start + 7116 (dyldMain.cpp:1477)
Is the crash happening when the app is being terminated?
Thank you!
Currently, I have successfully used ChannelMap to map hardware input channels and obtained audio data from the hardware device's MIC and OTG inputs. Additionally, I have used ChannelMap to map output channels to freely feed data for playback to each output channel. However, I now have a problem.
I have a hardware device that only has output channels (no input channels), and the system has set this hardware device as the default playback device. In this case, how can I obtain the audio data being played to the output channels for modification?
I'm implementing the PushToTalk framework and have encountered an issue where channelManager(_:didActivate:) is not called under specific circumstances.
What works:
App is in foreground, receives PTT push → didActivate is called ✅
App receives audio in foreground, then is backgrounded → subsequent pushes trigger didActivate ✅
What doesn't work:
App is launched, user joins channel, then immediately backgrounds
PTT push arrives while app is backgrounded
incomingPushResult is called, I return .activeRemoteParticipant(participant)
The system UI shows the speaker name correctly
However, didActivate is never called
Audio data arrives via WebSocket but cannot be played (no audio session)
Setup:
Channel joined successfully before backgrounding
UIBackgroundModes includes push-to-talk
No manual audio session activation (setActive) anywhere in my code
AVAudioEngine setup only happens inside didActivate delegate method
Issue persists even after channel restoration via channelDescriptor(restoredChannelUUID:)
Question:
Is this expected behavior or a bug? If expected, what's the correct approach to handle incoming PTT audio when the app is backgrounded and hasn't received audio while in the foreground yet?
Hi everyone,
I'm running into an issue with AVAudioRecorder when handling interruptions such as phone calls or alarms.
Problem:
When the app is recording audio and an interruption occurs:
I handle the interruption with audioRecorder?.pause() inside AVAudioSession.interruptionNotification (on .began).
On .ended, I check for .shouldResume and call audioRecorder?.record() again.
The recorder resumes successfully, but only the audio recorded after the interruption is saved. The audio recorded before the interruption is lost, even though I'm using the same file URL and not recreating the recorder.
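Here is a minimal sketch of that handling (the class wrapper and names are illustrative; recorder creation is omitted):

import AVFoundation

final class RecorderInterruptionHandler {
    var audioRecorder: AVAudioRecorder?
    private var observerToken: NSObjectProtocol?

    init() {
        observerToken = NotificationCenter.default.addObserver(
            forName: AVAudioSession.interruptionNotification,
            object: AVAudioSession.sharedInstance(),
            queue: .main
        ) { [weak self] note in
            self?.handleInterruption(note)
        }
    }

    private func handleInterruption(_ note: Notification) {
        guard let info = note.userInfo,
              let typeValue = info[AVAudioSessionInterruptionTypeKey] as? UInt,
              let type = AVAudioSession.InterruptionType(rawValue: typeValue) else { return }

        switch type {
        case .began:
            audioRecorder?.pause()
        case .ended:
            let optionsValue = info[AVAudioSessionInterruptionOptionKey] as? UInt ?? 0
            let options = AVAudioSession.InterruptionOptions(rawValue: optionsValue)
            if options.contains(.shouldResume) {
                // Per the documentation, this should continue appending to the same file.
                _ = audioRecorder?.record()
            }
        @unknown default:
            break
        }
    }
}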
Repro:
Start a recording with AVAudioRecorder
Simulate a system interruption (e.g., incoming call)
Resume recording after the interruption
Stop and inspect the output audio file
Expected: Full audio (before and after interruption) should be saved.
Actual: Only the audio after interruption is saved; the earlier part is missing
Notes:
According to the documentation, calling .record() after .pause() should resume recording into the same file.
I confirmed that the file URL does not change, and I do not recreate the recorder instance.
No error is thrown by the system during this process.
This behavior happens consistently when the app is interrupted and resumed.
Question:
Is this a known issue? Is there a recommended workaround for preserving the full recording when interruptions happen?
Thanks in advance!
Hi everyone, I downloaded the sample code EditingSpatialAudioWithAnAudioMix.zip from https://developer.apple.com/documentation/Cinematic/editing-spatial-audio-with-an-audio-mix. When I ran the "process" action from the command line, the program crashed.
From the source code, I found that the value of componentType is set to kAudioUnitType_FormatConverter:
// The actual `AudioUnit`.
public var auAudioMix = AVAudioUnitEffect()

init() {
    // Generate a component description for the audio unit.
    let componentDescription = AudioComponentDescription(
        componentType: kAudioUnitType_FormatConverter,
        componentSubType: kAudioUnitSubType_AUAudioMix,
        componentManufacturer: kAudioUnitManufacturer_Apple,
        componentFlags: 0,
        componentFlagsMask: 0)
    auAudioMix = AVAudioUnitEffect(audioComponentDescription: componentDescription)
}
But according to the documentation at https://developer.apple.com/documentation/avfaudio/avaudiouniteffect/init(audiocomponentdescription:), it seems that componentType cannot be set to kAudioUnitType_FormatConverter.
Has anyone encountered this problem?
I’m facing a problem while trying to achieve spatial audio effects in my iOS 18 app. I have tried several approaches to get good 3D audio, but the effect never felt good enough or it didn’t work at all.
What also troubles me is that my AirPods don't recognize my app as one playing spatial audio (the audio settings show "Spatial Audio Not Playing"), so I guess my app isn't using its spatial audio potential.
The first approach uses AVAudioEnvironmentNode with AVAudioEngine. Changing the position of the player node, as well as the listener's position, doesn't seem to change anything in how the audio plays.
Here's how I initialize AVAudioEngine:
import Foundation
import AVFoundation
class AudioManager: ObservableObject {
    // important class variables
    var audioEngine: AVAudioEngine!
    var environmentNode: AVAudioEnvironmentNode!
    var playerNode: AVAudioPlayerNode!
    var audioFile: AVAudioFile?
    ...

    // Sound setup
    func setupAudio() {
        do {
            let session = AVAudioSession.sharedInstance()
            try session.setCategory(.playback, mode: .default, options: [])
            try session.setActive(true)
        } catch {
            print("Failed to configure AVAudioSession: \(error.localizedDescription)")
        }

        audioEngine = AVAudioEngine()
        environmentNode = AVAudioEnvironmentNode()
        playerNode = AVAudioPlayerNode()

        audioEngine.attach(environmentNode)
        audioEngine.attach(playerNode)
        audioEngine.connect(playerNode, to: environmentNode, format: nil)
        audioEngine.connect(environmentNode, to: audioEngine.mainMixerNode, format: nil)

        environmentNode.listenerPosition = AVAudio3DPoint(x: 0, y: 0, z: 0)
        environmentNode.listenerAngularOrientation = AVAudio3DAngularOrientation(yaw: 0, pitch: 0, roll: 0)
        environmentNode.distanceAttenuationParameters.referenceDistance = 1.0
        environmentNode.distanceAttenuationParameters.maximumDistance = 100.0
        environmentNode.distanceAttenuationParameters.rolloffFactor = 2.0

        // example.mp3 is mono sound
        guard let audioURL = Bundle.main.url(forResource: "example", withExtension: "mp3") else {
            print("Audio file not found")
            return
        }
        do {
            audioFile = try AVAudioFile(forReading: audioURL)
        } catch {
            print("Failed to load audio file: \(error)")
        }
    }
    ...

    // Playing sound
    func playSpatialAudio(pan: Float) {
        guard let audioFile = audioFile else { return }
        // left side
        playerNode.position = AVAudio3DPoint(x: pan, y: 0, z: 0)
        playerNode.scheduleFile(audioFile, at: nil, completionHandler: nil)
        do {
            try audioEngine.start()
            playerNode.play()
        } catch {
            print("Failed to start audio engine: \(error)")
        }
        ...
    }
The second, more complex approach using PHASE did better. I made an example app that lets the user move the audio source in 3D space. I added reverb and sliders that change the audio position up to 10 meters in each direction from the listener, but the audio only really seems to change left to right (the x axis). Again, I think the trouble might be that the app is not recognized as spatial.
//Crucial class Variables:
class PHASEAudioController: ObservableObject {
    private var soundSourcePosition: simd_float4x4 = matrix_identity_float4x4
    private var audioAsset: PHASESoundAsset!
    private let phaseEngine: PHASEEngine
    private let params = PHASEMixerParameters()
    private var soundSource: PHASESource
    private var phaseListener: PHASEListener!
    private var soundEventAsset: PHASESoundEventNodeAsset?
    // Note: soundPipeline (the spatial mixer definition) and rolloffFactor are defined elsewhere in the class and not shown here.

    // Initialization of PHASE
    init() {
        do {
            let session = AVAudioSession.sharedInstance()
            try session.setCategory(.playback, mode: .default, options: [])
            try session.setActive(true)
        } catch {
            print("Failed to configure AVAudioSession: \(error.localizedDescription)")
        }

        // Init PHASE Engine
        phaseEngine = PHASEEngine(updateMode: .automatic)
        phaseEngine.defaultReverbPreset = .mediumHall
        phaseEngine.outputSpatializationMode = .automatic // nothing helps

        // Set listener position to (0,0,0) in World space
        let origin: simd_float4x4 = matrix_identity_float4x4
        phaseListener = PHASEListener(engine: phaseEngine)
        phaseListener.transform = origin
        phaseListener.automaticHeadTrackingFlags = .orientation
        try! self.phaseEngine.rootObject.addChild(self.phaseListener)

        do {
            try self.phaseEngine.start()
        } catch {
            print("Could not start PHASE engine")
        }

        audioAsset = loadAudioAsset()

        // Create sound source
        // Sphere
        soundSourcePosition.translate(z: 3.0)
        let sphere = MDLMesh.newEllipsoid(withRadii: vector_float3(0.1, 0.1, 0.1), radialSegments: 14, verticalSegments: 14, geometryType: MDLGeometryType.triangles, inwardNormals: false, hemisphere: false, allocator: nil)
        let shape = PHASEShape(engine: phaseEngine, mesh: sphere)
        soundSource = PHASESource(engine: phaseEngine, shapes: [shape])
        soundSource.transform = soundSourcePosition
        print(soundSourcePosition)
        do {
            try phaseEngine.rootObject.addChild(soundSource)
        } catch {
            print("Failed to add a child object to the scene.")
        }

        let simpleModel = PHASEGeometricSpreadingDistanceModelParameters()
        simpleModel.rolloffFactor = rolloffFactor
        soundPipeline.distanceModelParameters = simpleModel

        let samplerNode = PHASESamplerNodeDefinition(
            soundAssetIdentifier: audioAsset.identifier,
            mixerDefinition: soundPipeline,
            identifier: audioAsset.identifier + "_SamplerNode")
        samplerNode.playbackMode = .looping

        do {
            soundEventAsset = try phaseEngine.assetRegistry.registerSoundEventAsset(
                rootNode: samplerNode,
                identifier: audioAsset.identifier + "_SoundEventAsset")
        } catch {
            print("Failed to register a sound event asset.")
            soundEventAsset = nil
        }
    }

    // Playing sound
    func playSound() {
        // Fire new sound event with currently set properties
        guard let soundEventAsset else { return }
        params.addSpatialMixerParameters(
            identifier: soundPipeline.identifier,
            source: soundSource,
            listener: phaseListener)
        let soundEvent = try! PHASESoundEvent(engine: phaseEngine,
                                              assetIdentifier: soundEventAsset.identifier,
                                              mixerParameters: params)
        soundEvent.start(completion: nil)
    }
    ...
}
It might also be worth mentioning that I only have a Personal Team developer account.
Hi there,
I recently launched a DJ app on the Mac App Store, and I was wondering how I could access songs for mixing purposes via Apple Music, just like Serato, rekordbox, djay, and other DJ apps do.
Thanks,
Gunek
FaceTime’s screen-share audio balance is insanely absurd right now. Whenever I share media, the system audio that gets sent through FaceTime is a tiny whisper even at full volume (or even when connected to my speaker or headphones). The moment anyone on the call makes any noise at all, the shared audio ducks so hard it disappears, while the voice (or rustling or air conditioning noise) spikes to painful levels. It’s impossible to watch or listen to anything together. Also, the feature where FaceTime would shrink to a square during screen-sharing has been completely removed. That was a good feature and I'm really confused why it's gone. Now, the FaceTime window stays as a long rectangle that covers part of the content I'm trying to share (unless I do full screen tile, but then I can't pull up any other windows during the call) and can't be made smaller than about a third of the screen. You can't resize the window or adjust its dimensions, so it ends up blocking the actual media you're trying to watch.
Here are some feature requests/fixes that would greatly improve the FaceTime screen-share experience:
Option to adjust the shared media volume independently of call audio.
Disable/toggle the extreme automatic audio ducking while screen-sharing
Reintroduce the minimized “floating square” mode or allow full manual resizing and repositioning of the FaceTime window during screen-share sessions.
Overall, this setup makes FaceTime screen-sharing basically unusable. The audio balance is so inconsistent that it’s easier to switch to Zoom or Google Meet, which both handle shared sound correctly and let you move the call window out of the way. Until these issues are fixed, there’s no practical reason to use FaceTime for shared viewing at all.
We require assistance in resolving a critical audio design conflict within our Push-to-Talk (PTT) application. Our current volume amplification strategy—which relies on applying a GAIN factor to PCM samples in conjunction with setting the AVAudioSession category to Playback—is working successfully when PTT is used independently. However, upon integrating and reporting the same PTT call through the CallKit framework, this amplification effect is lost. The CallKit integration appears to be forcing a different, non-amplifying audio session category or configuration, negatively impacting the user's perceived call volume. We need guidance on how to maintain the AVAudioSessionCategoryPlayback setting, or an equivalent high-volume configuration, while operating under the control of CallKit.
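For context, the amplification we apply is essentially this kind of per-sample scaling (the function name and clamping are illustrative, not our exact production code):

import AVFoundation

// Scale the float samples of an AVAudioPCMBuffer in place.
func applyGain(_ gainFactor: Float, to buffer: AVAudioPCMBuffer) {
    guard let channels = buffer.floatChannelData else { return }
    let frameCount = Int(buffer.frameLength)
    for channel in 0..<Int(buffer.format.channelCount) {
        let samples = channels[channel]
        for frame in 0..<frameCount {
            // Clamp to [-1, 1] to avoid hard clipping after amplification.
            samples[frame] = max(-1.0, min(1.0, samples[frame] * gainFactor))
        }
    }
}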
Hi. I am mixing content destined for Vision Pro, locked to video. I have the AAX installer, but the ASAF video player demonstrated in the QuickTime videos is not included in the install package for Pro Tools. Would it be possible to post a link?
I'm new to iOS development and have been trying to make heads or tails of the documentation. I know there is a difference between the data fields returned for songs from the user library and from the catalog, but whenever I search the Apple site I can't find a list of each. For example, I'm trying to get the releaseDate of a song in my library, and it seems I'll have to cross-query the catalog entry using song.catalogID or song.isrc, but when I try to use them I can't find a cross-reference between the two. I'm totally turned around.
I'm also trying to determine whether a song in my library has been favorited. isFavorited (or something similar) doesn't seem to be a thing. I'm using the code below to display a solid star if the song has been favorited, or an empty one if it's not. It seems like a basic request, but I can't find anything on how to do it. I've searched the docs, googled, and experimented.
Does apple want us to query the user's Favorited Songs playlist or something? How do I know which playlist that is?
I know isFavorited isn't a thing, just using it here so you can see what my intension is:
HStack(spacing: 10) {
    Image(systemName: song.isFavorited ? "star.fill" : "star")
        .foregroundColor(song.isFavorited ? .yellow : .gray)
    Image(systemName: "magnifyingglass")
}