Latency Explained.

In computer recording, 'audio latency' is defined as the minimum time required for a computer to store a sample from an audio interface (e.g. the Novation AudioHub 2x4) in application memory and copy that same sample from application memory back to the audio interface output. Tests have shown that the minimum latency the ear can detect is 11ms, so provided the audio latency of a system is below this figure there will be no problems when recording. By looking at the path an audio signal takes through a computer-based recording system and examining what happens to it, we can see where and why latency is introduced and what can be done to reduce it.

After the signal enters the interface it must be converted from the analogue to the digital domain by an analogue-to-digital converter (ADC). This operation takes a finite amount of time which varies from one device to another but is typically around 0.5ms. The same degree of latency is introduced when the signal is returned from the computer and converted back to the analogue domain. Nothing can be done to reduce this, but since the latency introduced by the converters is so small it is of no great concern.

After the audio has been converted to the digital domain it is passed to a buffer before it is processed by the driver and then passed to the audio application. An audio buffer is a reserved segment of memory used to hold an advance supply of audio data to compensate for momentary delays in processing. The size of an audio buffer is given as the maximum number of samples it can hold. For sound coming from the computer the signal chain is reversed and there is an output buffer before the digital-to-analogue converter. Buffering introduces latency since a buffer needs to fill up by a certain amount before the data can continue along the chain. How much latency is introduced can be controlled by the user in two main ways:

1. Sample frequency: Buffer size remains fixed no matter what the sample frequency is, so the higher the sample frequency, the quicker audio data will pass through the buffer and the lower the latency will be. As an example, audio data will pass through a buffer twice as fast at a 96kHz sample rate as at 48kHz. Most sequencers and sound cards give the option of selecting which sample rate is to be used.

2. Buffer size: The smaller the buffer, the less time it will take for audio data to pass through it, and therefore the lower the latency will be. Making the buffer smaller comes at a price, however. With a smaller buffer there is less headroom for delays in processing, so the CPU needs to work harder at smaller buffer sizes to ensure that any delays are kept within the time allowed by the buffer.
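The relationship between buffer size, sample rate and latency described above is a simple division, and can be sketched as follows (the function name is illustrative, not part of any driver API):

```python
def buffer_latency_ms(buffer_size_samples, sample_rate_hz):
    """Latency added by one buffer, in milliseconds.

    A buffer of N samples at rate R holds N / R seconds of audio,
    so doubling the sample rate or halving the buffer size
    halves the latency the buffer introduces.
    """
    return buffer_size_samples / sample_rate_hz * 1000.0

# A 256-sample buffer at 48kHz adds roughly 5.3ms:
latency_48k = buffer_latency_ms(256, 48000)

# The same buffer at 96kHz empties twice as fast (~2.7ms),
# as does a half-size 128-sample buffer at 48kHz:
latency_96k = buffer_latency_ms(256, 96000)
latency_small = buffer_latency_ms(128, 48000)
```

Remember this figure applies per buffer, so the input and output buffers each contribute their own share to the total latency.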

Audio driver standards designed for low-latency performance (ASIO, Core Audio, WDM etc.) allow you to alter the size of the audio buffers. When using a PC with the UltraNova or AudioHub this is done from the Audio Control Panel. The Audio Control Panel can be accessed from the audio settings window in your music software or from the Start menu (Start/All Programs/Novation/USB Audio & MIDI Driver/Audio Control Panel). The Buffer Length slider changes the input and output buffer sizes, which are reported in the ASIO Settings box below. Add the Input Latency and Output Latency figures to calculate the total system latency.

When using a Mac with a Novation interface and Core Audio you can normally adjust the buffer size within the audio application.

If you are experiencing latency when recording and monitoring through your system then reducing the buffer size will help. If your CPU usage is registering too high, perhaps because you are running a lot of plug-in instruments and effects, then increasing the buffer size can help to decrease it.

A handy feature which the AudioHub 2x4 and UltraNova both offer, and which is commonly found on other audio interfaces, is direct monitoring. This allows the user to monitor the signal being recorded directly from the input rather than from the audio application, where it will have been delayed by the converters and buffers. The signal is still sent into the audio application, but with 'monitoring' turned off on the track being recorded it will not be heard in the signal coming from the computer.

Not only does direct monitoring avoid latency in the monitored signal, it also allows you to reduce the amount of CPU used by the audio driver. If you do not use direct monitoring then the maximum acceptable latency owing to the buffers is 10ms in total (allowing 0.5ms for each conversion stage, going by 11ms being the minimum latency the ear can detect). If direct monitoring is used then no matter what the buffer size, there will be no audible time difference in the monitor path between the signal being recorded and the signal from the computer. The only downside is that the recorded signal will appear late in the sequencer. If you know the input and output latency, which you can often find from your sequencer, then this is easy to remedy. As an example:

If the input latency and output latency are both 20ms and you are recording a vocalist over a backing track in your sequencer, then the vocalist will hear the start of the backing track 20ms after play/record is pressed. The vocalist will sing and hear their voice in time with the music since direct monitoring is being used. However, owing to input buffer latency the take will appear a further 20ms late in the sequencer, a total of 40ms latency. Therefore, once the take is recorded, if it is moved 40ms earlier in the sequencer, no latency in the recording will be detected.
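The offset calculation in the example above can be sketched in code. This is an illustrative helper, not part of any sequencer's API; it assumes the recorded take is available as a plain list of samples:

```python
def compensate_recording(samples, input_latency_ms, output_latency_ms,
                         sample_rate_hz):
    """Shift a recorded take earlier by the round-trip latency.

    The output latency delays when the performer hears the backing
    track; the input latency delays when their performance lands in
    the sequencer. Both add up, so the take must be moved earlier
    by their sum. Here that shift is done by dropping the first
    N samples of the take.
    """
    total_latency_ms = input_latency_ms + output_latency_ms
    offset_samples = round(total_latency_ms / 1000.0 * sample_rate_hz)
    return samples[offset_samples:]

# One second of silence recorded at 48kHz with 20ms latency each way:
# 40ms at 48kHz is 1920 samples, so 48000 - 1920 = 46080 samples remain.
take = [0.0] * 48000
aligned = compensate_recording(take, 20, 20, 48000)
```

In practice most sequencers perform this compensation automatically once the driver reports its latencies, but doing the arithmetic by hand shows where the 40ms figure comes from.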

Recording in this manner is possible with the Speedio, Xio and X-Station since the ASIO control panel shows the input and output latency in milliseconds. Increasing the buffer size and dealing with latency in this way is a good option if you need to record whilst running lots of plug-ins and reduce the amount of CPU being used.

Sometimes you may wish to monitor a signal through the computer processed by a plug-in effect or you may wish to monitor a signal generated by a plug-in instrument that you are controlling from a MIDI controller. In these cases direct monitoring is not an option so latency needs to be kept to a minimum. The plug-ins and the driver together may use up a large amount of CPU, depending on the speed of your computer. Streamlining your computer for audio will allow more CPU to be dedicated to audio purposes and minimise the amount used by other processes. The following links give good guides for doing so:

PC - 

Mac - 

The latest laptop processors from Intel use SpeedStep technology, which adjusts processing speed for longer battery life and better laptop performance. Sometimes this will cause the CPU to run slower than normal, so if you are experiencing CPU overload with an Intel-powered laptop you may find that installing and running Speedswitch (a free download) to turn off SpeedStep will improve system performance. For more information on SpeedStep technology visit:

Further information on latency:

Info on Mac audio drivers:

Info on PC audio drivers:
