Audio Export
This guide shows how to use alphaTab to generate raw audio samples for exporting purposes. The export feature serves the following use cases:

- **Allow users to download an audio file of the song.** If the audio is additionally passed to an audio codec like MP3 or OGG Vorbis, users can save the audio of the music sheet to disk. This is a quite common feature offered to users.
- **Use the raw audio for synchronization with external systems.** Your app might have its own mechanisms to provide media playback. It might have additional custom backing tracks, or you may want to split up the individual audio tracks to play them on separate output devices. By pre-computing the audio samples from the synthesizer you can build an external system which combines the alphaTab audio with any custom components. The external system can then be combined with the Audio & Video Sync feature to still have an interactive music sheet that shows correctly what's being played.
The audio export can be used regardless of the current mode the alphaTab player is in. This allows exporting audio even if an external audio backing track or video is used. Just be sure to pass in the required soundfont in this case; if a synthesizer is already active, the exporter can reuse the already loaded soundfont.
How to use this?
The audio exporter follows an asynchronous pull pattern:

- *async* because the exporter uses Promises (`Task` for C#, `Deferred` for Kotlin) to provide a clean way of requesting audio data without fighting with callbacks or events.
- *pull* because you request the next chunk of audio to be generated and pull the audio into your consumer code.
To export the audio you follow three main steps:

1. You start a new exporter with `await api.exportAudio(...)`.
2. You call `exporter.render(<milliseconds>)` to produce a chunk of audio which you can then process further (repeated until the end is reached).
3. You clean up the exporter via `exporter.destroy()`. The exporter also implements `Disposable` (`IDisposable` for C#, `AutoCloseable` for Kotlin) which allows easy cleanup via language features if supported.
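The three steps above can be sketched as a simple loop. To keep this snippet runnable without alphaTab, a stub exporter stands in for the object returned by `api.exportAudio(...)`; the method names mirror the steps above, while the stub's internals (chunk sizes, song length) are purely illustrative:

```typescript
// Shape of a rendered chunk: the real exporter delivers the samples
// as a Float32Array, as used in the full example further below.
interface AudioChunk { samples: Float32Array; }

// Stub standing in for the real exporter (illustrative only).
class StubExporter {
  private remainingMs = 1500; // pretend the song is 1.5s long
  destroyed = false;

  async render(milliseconds: number): Promise<AudioChunk | undefined> {
    if (this.remainingMs <= 0) {
      return undefined; // end of song reached
    }
    const ms = Math.min(milliseconds, this.remainingMs);
    this.remainingMs -= ms;
    // 44100 samples/s * 2 channels, scaled to the rendered duration
    return { samples: new Float32Array(Math.floor((ms / 1000) * 44100 * 2)) };
  }

  destroy(): void {
    this.destroyed = true;
  }
}

// The three steps: start, pull chunks until undefined, clean up.
async function pullAll(exporter: StubExporter): Promise<Float32Array[]> {
  const chunks: Float32Array[] = [];
  try {
    while (true) {
      const chunk = await exporter.render(500);
      if (chunk === undefined) {
        break;
      }
      chunks.push(chunk.samples);
    }
  } finally {
    exporter.destroy(); // always release the exporter, even on errors
  }
  return chunks;
}
```

The `try`/`finally` (or a `using` declaration where available) ensures the exporter is released even if processing a chunk throws.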
The raw audio samples for a whole song can consume a considerable amount of memory. A calculation example:

- 4 bytes per sample (32-bit float samples)
- 2 audio channels (left and right for stereo sound)
- 44100 samples per second

A 1 minute song already needs ~21MB of memory (`60s * 4bytes * 2channels * 44100samples/s`), multiply accordingly.
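The estimate above can be expressed as a small helper (the function name is illustrative):

```typescript
// Rough memory estimate for raw 32-bit float samples:
// seconds * sampleRate * channels * 4 bytes.
function rawAudioBytes(seconds: number, sampleRate = 44100, channels = 2): number {
  const bytesPerSample = Float32Array.BYTES_PER_ELEMENT; // 4
  return seconds * sampleRate * channels * bytesPerSample;
}
```

For a 1 minute song, `rawAudioBytes(60)` yields 21,168,000 bytes, the ~21MB figure mentioned above.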
To keep the memory pressure low, you might stream the chunks into a 3rd-party library that encodes the audio in a smaller format (e.g. MP3 or OGG Vorbis).
Available options

The `AudioExportOptions` allow customizing various aspects of the exported audio:

- `soundFonts` can be used to customize the soundfonts used during export.
- `sampleRate` can be used to customize the sample rate of the exported audio.
- `useSyncPoints` controls whether the sync points of the currently loaded song are applied during audio generation.
- `masterVolume` controls the master volume of the generated audio.
- `metronomeVolume` controls the volume of the metronome ticks. (Keep in mind that the use of `useSyncPoints` changes the audio duration; the metronome is aligned with the music notes, not with the synthesized audio.)
- `playbackRange` controls the audio range which is exported.
- `trackVolume` controls the volume of every track (percentage-wise relative to the already configured absolute volume).
- `trackTranspositionPitches` controls an additional transposition pitch for the tracks.
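Assuming the option names listed above, a typical per-track configuration could look like this. The stub class only mirrors the options' shape so the snippet runs standalone; in real code you construct `new alphaTab.synth.AudioExportOptions()` instead, and the default values shown here are illustrative, not alphaTab's actual defaults:

```typescript
// Stub mirroring the AudioExportOptions shape described above
// (illustrative defaults; use alphaTab.synth.AudioExportOptions in real code).
class AudioExportOptionsStub {
  sampleRate = 44100;
  masterVolume = 1;
  metronomeVolume = 0;
  useSyncPoints = false;
  trackVolume = new Map<number, number>();
  trackTranspositionPitches = new Map<number, number>();
}

const options = new AudioExportOptionsStub();
options.useSyncPoints = true;                 // apply the song's sync points
options.trackVolume.set(0, 0.5);              // track 0 at 50% of its configured volume
options.trackTranspositionPitches.set(1, -2); // track 1 transposed down two semitones
```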
Example

This example exports the audio and creates a WAV file out of the samples. WAV files contain the raw samples; we just need to write the correct file header. In this demo we then create a Blob URL in the browser to set the WAV file as the source of an `<audio />` tag to play it.
```typescript
async function generateWav() {
  try {
    // setup options
    const options = new alphaTab.synth.AudioExportOptions();
    options.masterVolume = 0.8;
    options.metronomeVolume = 1;
    options.trackVolume.set(0, 0.5);
    options.sampleRate = 44100;

    // start exporter
    const exporter = await api!.exportAudio(options);

    // collect all chunks in 500ms steps
    const chunks: Float32Array[] = [];
    try {
      while (true) {
        const chunk = await exporter.render(500);
        if (chunk === undefined) {
          break;
        }
        chunks.push(chunk.samples);
      }
    } finally {
      exporter.destroy();
    }

    // use the samples.
    // in this example: create a wav file and set it as source of the audio element
    const wav = convertSamplesToWavBlobUrl(chunks, options.sampleRate);
    // <audio id="wav" controls="true" />
    document.querySelector<HTMLAudioElement>('#wav')!.src = wav;
  } catch (e) {
    console.error('Error during export', e);
    alert('Something went wrong during export, check the browser logs for details');
  }
}
```
```typescript
function convertSamplesToWavBlobUrl(chunks: Float32Array[], sampleRate: number): string {
  const samples = chunks.reduce((p, c) => p + c.length, 0);
  const wavHeaderSize = 44;
  const fileSize = wavHeaderSize + samples * 4;
  const buffer = alphaTab.io.ByteBuffer.withCapacity(fileSize);

  //
  // write wav header

  // RIFF chunk
  buffer.write(new Uint8Array([0x52, 0x49, 0x46, 0x46]), 0, 4); // RIFF
  alphaTab.io.IOHelper.writeInt32LE(buffer, fileSize - 8); // file size
  buffer.write(new Uint8Array([0x57, 0x41, 0x56, 0x45]), 0, 4); // WAVE

  // format chunk
  buffer.write(new Uint8Array([0x66, 0x6D, 0x74, 0x20]), 0, 4); // fmt␣
  alphaTab.io.IOHelper.writeInt32LE(buffer, 16); // block size
  alphaTab.io.IOHelper.writeInt16LE(buffer, 3); // audio format (3=WAVE_FORMAT_IEEE_FLOAT)
  const channels = 2;
  alphaTab.io.IOHelper.writeInt16LE(buffer, channels); // number of channels
  alphaTab.io.IOHelper.writeInt32LE(buffer, sampleRate); // sample rate
  alphaTab.io.IOHelper.writeInt32LE(buffer, Float32Array.BYTES_PER_ELEMENT * channels * sampleRate); // bytes/second
  const bitsPerSample = Float32Array.BYTES_PER_ELEMENT * 8;
  alphaTab.io.IOHelper.writeInt16LE(buffer, channels * Math.floor((bitsPerSample + 7) / 8)); // block align
  alphaTab.io.IOHelper.writeInt16LE(buffer, bitsPerSample); // bits per sample

  // data chunk
  buffer.write(new Uint8Array([0x64, 0x61, 0x74, 0x61]), 0, 4); // data
  alphaTab.io.IOHelper.writeInt32LE(buffer, samples * 4); // data chunk size
  for (const c of chunks) {
    const bytes = new Uint8Array(c.buffer, c.byteOffset, c.byteLength);
    buffer.write(bytes, 0, bytes.length);
  }

  const blob: Blob = new Blob([buffer.toArray()], {
    type: 'audio/wav'
  });
  return URL.createObjectURL(blob);
}
```
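As a sanity check for the header layout written above, a small standalone parser can read the fields back with a `DataView`. It assumes the fixed 44-byte layout produced by `convertSamplesToWavBlobUrl` (all multi-byte fields little-endian):

```typescript
// Parse the fields of a 44-byte float-WAV header for verification.
function parseWavHeader(bytes: Uint8Array) {
  const view = new DataView(bytes.buffer, bytes.byteOffset, bytes.byteLength);
  const tag = (o: number) =>
    String.fromCharCode(bytes[o], bytes[o + 1], bytes[o + 2], bytes[o + 3]);
  return {
    riff: tag(0),                          // "RIFF"
    wave: tag(8),                          // "WAVE"
    format: view.getUint16(20, true),      // 3 = WAVE_FORMAT_IEEE_FLOAT
    channels: view.getUint16(22, true),    // 2 for stereo
    sampleRate: view.getUint32(24, true),  // e.g. 44100
    bitsPerSample: view.getUint16(34, true), // 32 for float samples
    dataSize: view.getUint32(40, true),    // size of the data chunk in bytes
  };
}
```

Feeding the first 44 bytes of the generated WAV file into this function should return the values configured in the export.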