These notes are a mixture of guidelines, configuration details and setup recommendations for using Jack Audio Server on FreeBSD. Most of it is based on my experiments and findings while doing a rewrite of the Jack driver backend for FreeBSD. I am neither a sound engineer nor an expert on FreeBSD system internals though.
Requirements vary greatly between different use cases. It makes sense to be aware of what you want to achieve, before going into the details.
This is the simplest use case, given that there is no need to synchronize the output with external events. Latency settings can be relaxed, longer processing cycles and large buffers keep CPU load and system timing constraints at a minimum. If you have to synchronize with external events, see the section on live processing.
Usually recording also includes playback, e.g. to capture a guitar playing along to a previously recorded drum track. Certainly you want the timing of the guitar track to match the drums. Still, there is no need to go low latency, since audio software can compensate for the latency while recording. But it's essential to have stable latency, for multiple takes and also between recording sessions.
Note: While it's technically possible to use different devices for recording and playback, this is not recommended. It may cause problems since the devices are not synchronized and run at a slightly different speed.
Obviously you want to hear your instrument playing while recording. It is highly preferable to circumvent the software stack altogether and have an external monitoring solution in hardware. This could be a complete mixing desk or just some direct monitoring knobs on your audio interface. Going through the software stack will introduce noticeable latency.
If you have to do monitoring through Jack, use extra low latency settings. Maybe it helps to run Jack at realtime priority and set up the monitoring directly as connections on the Jack server, bypassing other Jack clients (untested).
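As an untested sketch of that idea: the Jack server exposes the hardware channels as ports of its system client, so monitoring can be wired up directly with the command line tools, assuming the usual system:capture_N and system:playback_N port names.

jack_lsp                                          # list the available ports
jack_connect system:capture_1 system:playback_1   # monitor input 1 on output 1
jack_connect system:capture_2 system:playback_2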
This means use cases where low latency is crucial, like adding plugin effects to a live performance, emulating guitar amps or playing a software synthesizer. To be honest, I don't think FreeBSD is the best tool for this job. Most hardware drivers set a lower limit on the duration of processing cycles, and thus on the lowest achievable latency.
Don’t do this unless your sounds are very timing insensitive. Or prove me wrong.
There seem to be only two drivers geared towards audio interfaces for music creation and recording. The other pcm drivers (man sound) do work with Jack, but a SoundBlaster or internal sound card may have problems recording a guitar or driving your studio headphones.
- snd_hdspe: RME HDSPe AIO and HDSPe RayDAT, professional PCIe cards from RME.
- snd_uaudio: USB audio interfaces, most class compliant devices should be supported.
The RME cards work really well, with very low latency if desired. But the driver is somewhat limited: all channels are split into separate pairs, one pair of channels per pcm device. This means an ADAT connection of 8 channels at 48kHz is split into 4 pcm devices, with no direct way to access all 8 channels from Jack.
USB audio interfaces are more common, affordable, and come in many varieties suitable for different setups. The snd_uaudio driver implements the protocol for generic USB audio class devices, so look out for a Class Compliant device.
Tip: Class Compliant devices are often marketed as being "iPad compatible".
There are limitations with USB audio devices though:
- Some are not completely Class Compliant, only tested on Windows and Mac.
- Some need USB quirks to get them running.
- High sample rates with lots of channels may not be supported.
- Hardware mixing and routing is typically not accessible in software.
In any case, snd_uaudio works with audio blocks of length 2ms or more, which places a lower boundary on the latency you can achieve: expect at least 5ms input and 7ms output latency for Jack.
If your hardware is recognized correctly, it appears in the system logs as a pcm device:
# dmesg | grep pcm
pcm0: <ATI R6xx (HDMI)> at nid 3 on hdaa0
pcm1: <Realtek ALC262 (Analog)> at nid 21 and 24,25,26 on hdaa1
pcm2: <Realtek ALC262 (Front Analog Headphones)> at nid 27 on hdaa1
pcm3: <USB audio> on uaudio0
and also by its device driver:
# dmesg | grep uaudio
uaudio0 on uhub2
uaudio0: <Roland EDIROL UA-25EX, rev 1.10/1.00, addr 2> on usbus4
uaudio0: Play[0]: 48000 Hz, 2 ch, 24-bit S-LE PCM format, 2x2ms buffer.
uaudio0: Record[0]: 48000 Hz, 2 ch, 24-bit S-LE PCM format, 2x2ms buffer.
uaudio0: MIDI sequencer.
pcm3: <USB audio> on uaudio0
uaudio0: No HID volume keys found.
Otherwise, check the FreeBSD Handbook on how to set up your sound card.
Most drivers don't need any special treatment and work just fine. USB audio is a bit different due to the variety of hardware and supported formats. Some important settings are available at boot time (man snd_uaudio):
/boot/loader.conf
snd_uaudio_load="YES"                  # (1)
hw.usb.uaudio.default_channels="2"     # (2)
hw.usb.uaudio.default_bits="24"
hw.usb.uaudio.default_rate="48000"
hw.usb.quirk.0="0x0a4a 0xc150 0x0000 0xffff UQ_CFG_INDEX_1"       # (3)
hw.usb.quirk.1="0x0582 0x00e6 0x0000 0xffff UQ_AU_VENDOR_CLASS"
1. Force loading the driver, prerequisite for other settings.
2. Default number of channels, sample size and sample rate.
3. Quirks to make some incompatible devices work.
If a USB device supports multiple configurations, the driver will choose the "best" one. You can make it prefer a different channel count, sample size and sample rate by setting the defaults here. Quirks are needed when devices don't adhere to standards and only work with some special treatment. See man usb_quirk.
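The vendor and product IDs needed for such quirk entries can be read from the device descriptor with usbconfig(8). A sketch, where the device address 4.2 is just an example:

usbconfig list                                   # find the device address
usbconfig -d 4.2 dump_device_desc | grep -E 'idVendor|idProduct'
usbconfig -d 4.2 add_quirk UQ_AU_VENDOR_CLASS    # try a quirk at runtime, then re-plug the device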
These are system-wide settings to manage sound devices. Sound devices are numbered for identification, with an unnumbered alias /dev/dsp which represents the default device.
/etc/sysctl.conf
hw.snd.verbose=2         # (1)
hw.snd.default_auto=0    # (2)
hw.snd.default_unit=1    # (3)
1. Get more info from /dev/sndstat, recommended!
2. Automatic assignment of the default sound device /dev/dsp, disabled here.
3. Manually set the default sound device /dev/dsp.
See man sound for more details and possible values. The default sound device is picked up by desktop environments and other software like browsers. I prefer to set it to some internal sound card, and not to my main audio interface, to avoid conflicts.
Warning: Order and numbering of sound devices is not fixed and may change on reboot if new hardware is added.
One important latency factor is the number of samples that the device driver processes at once. For USB devices this is set at boot time:
hw.usb.uaudio.buffer_ms="2"
I highly recommend using the minimum value here, which is 2 milliseconds of sample data. Apart from reducing the transfer latency, it also has another effect: even if Jack processes a larger block of samples per cycle, this smooths out the cycle times.
For non-USB devices, have a look at the corresponding man page. If the driver provides no dedicated knobs, it may be worth a try to lower the generic sound latency tunables (man sound):
hw.snd.latency=0
hw.snd.latency_profile=0
These mainly affect the buffering latency, which is irrelevant to Jack. But some device drivers adapt to these tunables and process smaller blocks of samples at once.
Although not directly involved, timing accuracy also plays a role with latency. Inaccurate timer wakeups contribute to buffer over- and underruns, especially with low-latency setups. The following increases overall timing accuracy of the system and is recommended for all use cases:
kern.timecounter.alloweddeviation=0
The only downside is more frequent system wakeups, which translates to higher power and battery consumption on laptops.
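kern.timecounter.alloweddeviation is a regular sysctl, so it can be tried out at runtime before making it permanent in /etc/sysctl.conf:

sysctl kern.timecounter.alloweddeviation=0    # apply immediately
sysctl kern.timecounter.alloweddeviation      # verify the current value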
Sound devices on FreeBSD accept various sample formats, sample rates and channel configurations. They also support concurrent access with separate volume control per application. This is achieved by an automatic conversion stage in front of the hardware driver, dynamically composed of format conversion, channel mixing and volume control stages as needed.
Conversion stages show up in cat /dev/sndstat as feeder_format, feeder_mixer or feeder_volume:
...
pcm3: <USB audio> at ? kld snd_uaudio (1p:1v/1r:1v) default
    snddev flags=0x2e6<AUTOVCHAN,SOFTPCMVOL,BUSY,MPSAFE,REGISTERED,VPC>
    [pcm3:play:dsp3.p0]: spd 48000, fmt 0x00200010/0x00210000, flags 0x00002100, 0x00000006
    interrupts 1053, underruns 0, feed 1052, ready 0 [b:4608/2304/2|bs:4096/2048/2]
    channel flags=0x2100<BUSY,HAS_VCHAN>
    {userland} -> feeder_mixer(0x00200010) -> feeder_format(0x00200010 -> 0x00210000) -> {hardware}
...
Note: Automatic conversion is not applied to audio interfaces with more than 8 channels.
While very convenient in general, this behaviour has some drawbacks when using Jack. The conversion stages introduce noticeable latency, irregular processing cycles and hinder buffer management by reporting incorrect buffer statistics.
There are two solutions, depending on how the audio interface is used.
- Bitperfect Mode: Completely disable conversion and concurrent access. This makes sense if Jack is the only program to open the device.

  /etc/sysctl.conf
  dev.pcm.3.play.vchans=0
  dev.pcm.3.rec.vchans=0
  dev.pcm.3.bitperfect=1
- Exclusive Mode: Configure Jack to open the device in exclusive mode (see Jack configuration). Make sure the device is not used by any other program at the same time. Also we have to set the sample rate and format of the device to match exactly what we want to use with Jack.

  /etc/sysctl.conf
  dev.pcm.3.play.vchanformat=s24le:2.0
  dev.pcm.3.play.vchanrate=48000
  dev.pcm.3.rec.vchanformat=s24le:2.0
  dev.pcm.3.rec.vchanrate=48000
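In both cases the dev.pcm sysctls can also be applied at runtime, as long as the device is attached and not in use. A sketch for the bitperfect variant, using device unit 3 as in the examples above:

sysctl dev.pcm.3.play.vchans=0
sysctl dev.pcm.3.rec.vchans=0
sysctl dev.pcm.3.bitperfect=1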
When running Jack, we can check cat /dev/sndstat again to make sure there is no conversion going on - there should be only feeder_root between userland and hardware:
...
pcm3:play:dsp3.p0[pcm3:virtual:dsp3.vp0]: spd 48000, fmt 0x00210000, flags 0xb000010c, 0x00000001, pid 1779 (jackdbus)
    interrupts 0, underruns 0, feed 2467, ready 8928 [b:0/0/0|bs:16368/2046/8]
    channel flags=0xb000010c<RUNNING,TRIGGERED,BUSY,VIRTUAL,BITPERFECT,EXCLUSIVE>
    {userland} -> feeder_root(0x00210000) -> {hardware}
...
Jack tries to lock part of its memory in RAM, to prevent it from being swapped out by the operating system. We have to explicitly allow this in /etc/login.conf, for the users that want to run Jack. For simplicity I just change the resource limit of the default login class: search for "memorylocked" and increase it to at least

...
        :memorylocked=128M:\
...
This should be sufficient for Jack. Remember to run
# cap_mkdb /etc/login.conf
afterwards, and then log out and log in again with the user running Jack.
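To verify the new limit, log in again as the Jack user and check the resource limits; a quick sanity check:

limits | grep memorylocked    # limit imposed by the login class
ulimit -l                     # same limit as seen by sh, in kbytes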
The FreeBSD scheduler is able to run processes with so-called realtime priority, which means these processes will not be interrupted by other processes, or even drivers. Running Jack with realtime priority can help a great deal to avoid gaps in audio processing, in particular with modern desktop environments.
Traditionally, only root was allowed to run processes at realtime priority.
Starting with FreeBSD 13.1, this privilege can be granted to individual users.
We have to load the mac_priority kernel module, through
# kldload mac_priority
or at system boot for a permanent setup:
/etc/rc.conf
kld_list="mac_priority"
Then we just add the audio user to the realtime group.
# pw groupmod realtime -m joe
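A quick sanity check, using the example user joe from above:

kldstat | grep mac_priority    # the module should be loaded
id joe                         # joe should be a member of the realtime group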
The man pages have more info on this, see man mac_priority and man rtprio.
Warning: Misbehaving processes running at realtime priority can render a system unusable by starving all other processes. Only selected processes or threads should be run at realtime priority.
Fortunately, Jack and Jack clients take care of this and only elevate threads to realtime priority when needed. This can be enabled in the Jack settings.
Obviously we have to install Jack from packages (pkg install jackit) or ports (audio/jack) first. Some USB devices provide MIDI ports; they can be made accessible via audio/jack_umidi. To make any use of Jack we probably need additional software like:
- Jack GUI: Both audio/qjackctl and audio/cadence are GUI utilities for managing a Jack server and its audio connections in a graphical way.
  Warning: The Jack settings produced by the Cadence GUI are incompatible with FreeBSD. QjackCtl creates valid Jack configurations with the oss driver, but doesn't support all settings.
- DAW: I'd suggest audio/ardour for full-blown projects, but there are some alternatives with different scope and varying state of completeness, like audio/muse-sequencer, audio/traverso or audio/zrythm.
- Plugins: Search for package names that end in "-lv2", there's plenty of 'em. E.g. audio/calf-lv2 and audio/lsp-plugins-lv2 will cover the basics, there's audio/guitarix-lv2 for guitar effects, and audio/avldrums-lv2 is a decent MIDI drum set.
- Synthesizers and Samplers: I have no experience with standalone synthesizers, but searching the packages for "synth" will give you some options. The same goes for samplers; I only use audio/hydrogen to prototype MIDI drum tracks.
Tip: A quick and easy way to test Jack sound output is to start Hydrogen and open one of the demos coming with it.
Starting the Jack server via DBUS is the "modern" approach and most current Jack clients assume that to be the default. Usually a DBUS service is required to run desktop environments anyway; it is enabled in rc.conf:
dbus_enable="YES"
Then you should be able to configure and run Jack through the jack_control command line interface (see below). The DBUS approach is quite flexible. It is common for audio software to start a Jack server on demand if it's not already running.
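The server itself is managed through the same command line tool; the relevant subcommands are listed by jack_control help:

jack_control start     # start the Jack server with the stored settings
jack_control status    # check whether a server is running
jack_control stop      # stop the Jack server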
The alternative is to start Jack server by an RC service which can be enabled at boot time:
/etc/rc.conf
jackd_enable="YES"     # (1)
jackd_user="joe"       # (2)
jackd_rtprio="YES"     # (3)
1. Enable Jack RC service at boot time, set "NO" to start manually.
2. The user for which the Jack server is started.
3. Set the system scheduling for Jack server to realtime priority.
This can be convenient for some rather static setups, but it is restricted to a single audio software user. Configuration is also done in rc.conf.
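Assuming the RC script installed by the Jack package is named jackd, matching the jackd_enable variable above, the service is managed as usual:

service jackd start      # start the Jack server now
service jackd restart    # restart after changing settings in /etc/rc.conf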
The RC script allows running the Jack server with realtime priority, even when the user itself does not have the rights to do so. In practice this leads to problems, because Jack clients started by the user cannot run with realtime priority.
Thus, starting with FreeBSD 13.1, it is recommended to grant realtime privileges to the audio user through the mac_priority kernel module, see Realtime Priority.
Caution: Mixing the DBUS and RC service methods can be problematic, make sure you only run one instance of Jack server at a time.
Let's focus on the DBUS approach first. Don't worry, the RC service takes the same setting parameters, just as CLI arguments. The idea of the DBUS control interface is to change the current settings as needed, and then start the Jack server with these settings. Settings are persistently stored in ~/.config/jack/conf.xml.
# jack_control help
prints an overview of the subcommands of the control interface. Before anything else we have to set the driver backend, OSS in our case.
# jack_control ds oss
Then we can examine the driver specific settings.
# jack_control dp
Individual parameters can be modified as follows:
# jack_control dps rate 48000
Relevant parameters for the OSS driver backend are listed below; a complete example follows the list.
- rate: Sample rate used by the audio interface, like 44100, 48000, 96000.
- period: Length of a Jack processing cycle in samples (per channel). The duration depends on the sample rate, e.g. a period of 384 at 48kHz results in 384 / 48000 = 8ms. A lower value means less overall latency, but also more risk of playback and recording gaps.
  Tip: I recommend using a multiple of what the device driver processes at once. With 96 (2ms) for a USB device at 48kHz, a period of 192 (4ms), 384 (8ms) or 768 (16ms) would be feasible. If unsure, search the logs for "read blocks" and "write blocks" - Jack tries to detect the block size processed by the driver when opening the device.
  Warning: Some Jack clients expect the period to be a power of two (256, 512, 1024…), but most software and plugins do not rely on that.
- nperiods: Additional output buffer, in number of periods. Increasing this value by one will increase playback latency by one period. With the default value of 1, the buffer-induced latency stays between 0 and 1 period for input, and between 1 and 2 periods for output in normal operation. Also, Jack processing can be 1 period late before playback and recording gaps occur.
Tip: The default of 1 extra period works well in most setups; there's rarely any need to change this.
- wordlength: Sample size in bits (16, 24, 32).
- inchannels: Number of recording channels of the sound device.
- outchannels: Number of playback channels of the sound device.
- excl: Exclusive access, only Jack can use the sound device while Jack is running. Recommended!
- capture and playback: Paths to the recording and playback device. They must represent the same audio interface, or you will likely run into synchronization problems.
- input-latency: External recording latency in samples; includes the whole path from the analog input through the audio interface to the device driver. This is used for latency correction by Jack clients.
- output-latency: Same as input-latency, but for the whole path from the device driver to the analog output. Also used for latency correction.
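To tie these parameters together, here is a sketch of a complete driver setup for the 2-channel EDIROL UA-25EX from the dmesg output earlier, assuming it shows up as /dev/dsp3. Adjust the values to your own interface, and treat the latency values as placeholders until measured (see the latency measurement section below).

#!/bin/sh
# Sketch: OSS driver settings for a 2-channel USB interface on /dev/dsp3.
jack_control ds oss
jack_control dps rate 48000
jack_control dps period 384              # 8ms at 48kHz, a multiple of the 2ms USB blocks
jack_control dps nperiods 1
jack_control dps wordlength 24
jack_control dps inchannels 2
jack_control dps outchannels 2
jack_control dps excl True
jack_control dps capture /dev/dsp3
jack_control dps playback /dev/dsp3
jack_control dps input-latency 0         # placeholder, measure with jack_iodelay
jack_control dps output-latency 0        # placeholder, measure with jack_iodelay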
The engine parameters affect the general behaviour of Jack, most of them should be left untouched. Available parameters can be listed with
# jack_control ep
- realtime: Let Jack and Jack clients elevate specific threads to be scheduled with realtime priority. Set this to False if the user does not have realtime privileges, see Realtime Priority.
- verbose: This can be temporarily enabled to help with debugging. Jack will spit out a lot of details into the logs; turn it off for normal operation.
- sync: By default, Jack operates in what it calls asynchronous mode. A processing cycle fetches a period of capture samples first, then outputs the playback samples processed in the previous cycle, and finally lets the clients process the samples of the current cycle. Jack will not wait for all clients to finish, and proceeds with the next cycle when time is up.
If synchronous mode is enabled here, Jack will output the playback samples processed in the current cycle, bypassing the extra buffer with one period of latency. This comes at the cost of strictly waiting for all clients to finish, thus clients and plugins that misbehave may quickly render Jack operation unstable.
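For reference, engine parameters are set just like driver parameters, using the eps subcommand instead of dps; a short sketch with values matching the recommendations above:

jack_control eps realtime True    # allow realtime scheduling for Jack threads
jack_control eps verbose False    # keep the logs quiet in normal operation
jack_control eps sync False       # stay in the default asynchronous mode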
Sometimes you want to switch between different audio interfaces, or even just different configurations of the same interface. Since GUI tools like QjackCtl and Cadence can’t handle Jack settings correctly on FreeBSD, we have to resort to shell scripts:
#!/bin/sh
# Jack config for the Focusrite 18i20
jack_control dps rate 48000
jack_control dps period 768
jack_control dps wordlength 32
jack_control dps inchannels 18
jack_control dps outchannels 20
jack_control dps input-latency 80
jack_control dps output-latency 80
Jack DBUS server stores its settings persistently, thus we can adjust only the settings that change from interface to interface, and leave the others untouched. If the device numbers change, symbolic links may be used to create a device alias on the fly.
# ln -s /dev/dsp3 /dev/dsp_jack
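If such an alias is used, point the capture and playback driver parameters at it instead of the numbered device:

jack_control dps capture /dev/dsp_jack
jack_control dps playback /dev/dsp_jack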
If started as an RC service, the settings have to be changed in /etc/rc.conf before restarting the service.
As mentioned previously, software like Ardour can compensate the playback and recording latency of a system when recording. This not only involves the internal buffer and driver delays, but also the bus communication (PCI, USB), and conversion between analog and digital on the audio interface.
Even with numbers from the interface manufacturer, it is difficult to foretell the actual latency of a specific setup. So our goal is to measure the latency of our setup. The easiest way to do this is to create a physical loopback connection on the audio interface, connecting a playback output back to a recording input. Then we can measure the roundtrip latency as seen by Jack clients, using the jack_iodelay utility.
The idea is to model the recording process, and use a loopback cable in place of the musician. Let’s say you are listening to a drum track on headphones, and recording a guitar performance to that. Then you would use the loopback cable to connect the headphone output to the guitar recording input, for latency measurement. In principle this should also include speaker and effect delays, but that’s usually not practicable.
- Turn down the volume on the audio interface, where available.
- Connect the loopback cable to your playback output or a similar (analog) output.
- Connect the other end to your recording input or a similar (analog) input.
- Make sure the output is compatible with the input; there are different impedances (guitars, microphones) and line levels.
- Find out which channels these are mapped to in Jack.
Warning: Double-check that your audio interface does not monitor the loopback input on the output - you don't want to create a feedback loop on the interface!
If direct monitoring can't be disabled on the audio interface, there's the possibility to use a left stereo channel as output and a right channel as input, given that the monitoring is in stereo, not mono.
With the loopback connection in place, we can start the Jack server and do the measurement. Set the latency correction parameters, input-latency and output-latency, to 0. Make sure the other settings are the same as you would use for recording. A change of the sample rate, for example, will produce a different latency.
- Start the Jack server.
- Open a graphical Jack connection manager, like QJackCtl or Catia.
- Run jack_iodelay in a separate terminal, Jack connectors will appear.
- Connect the output of jack_iodelay to the loopback playback channel.
- Connect the loopback recording channel to the input of jack_iodelay.
- Turn up the volume on the audio interface, if available.
Now jack_iodelay should print a series of measured latencies, with corresponding correction parameters for Jack.
...
  2157.119 frames     44.940 ms total roundtrip latency
    extra loopback latency: 1005 frames
    use 502 for the backend arguments -I and -O
...
Note the correction parameter values: "backend arguments -I and -O" corresponds to the input-latency and output-latency settings. Then disconnect jack_iodelay in Jack and stop it in the terminal by pressing Ctrl + C.
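The measured values can then be fed back into the Jack settings, here using the numbers from the example output above; restart the Jack server so they take effect:

jack_control dps input-latency 502
jack_control dps output-latency 502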
The correction parameters printed will split the total roundtrip latency equally between input and output. Actual latencies may be asymmetric, but that doesn’t matter for recording.
For comparison, in my measurements OSS and USB audio hardware add about 20ms of roundtrip latency. That results in correction parameters of around 480 samples (10ms each) when running at 48kHz. If you get unusually high values, or even unstable values, there may be something wrong with the measurement or your setup.
Note: Up until Jack 1.9.21, the minimum internal latency Jack computes is off by one period. This means the measuring method still leads to valid correction parameters, but does not reflect the actual roundtrip latency of OSS and hardware. A fix is underway.
After setting the latency correction parameters in Jack, recordings should match the timing of existing tracks within about 1-2 ms.