Display lag and audio latency - Some information and problems
First, I'd like to describe what display lag is. According to Wikipedia, display lag is "a phenomenon associated with some types of LCD displays, and nearly all types of HDTVs, that refers to latency, or lag measured by the difference between the time a signal is input into a display and the time it is shown by the display".
What does this mean? If your display has 30ms of display lag and you press a key to perform an action, then, assuming the action is processed instantly by your computer, you'll see it performed 30ms later.
Display lag is usually in the 20-60ms range, which can be acceptable for regular games, but is simply unacceptable in music games. Players usually adapt to the screen they are playing on, but once you start to get good, you will still notice the difference. Personally, I am used to playing beatmaniaIIDX CS on my CRT television, and there's no way I can play LR2 on my laptop, just because it has a LED screen (unless I edit the in-game judge timing setting).
So, what can we do to compensate for display lag on modern computers? Music games are a combination of eye-hand coordination and ear-hand coordination: if no display lag is present, displayed notes and background music are perfectly in sync. Display lag introduces an offset between the two (shown notes are delayed with respect to the background music), and I've thought of two solutions to at least reduce that offset (sketched right after the list):
- Delay the background music by the same amount of time as the display lag;
- Anticipate the shown notes by the same amount of time as the display lag.
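To make the two options concrete, here is a minimal sketch; bgm, note, songStartMs, displayLagMs and scrollPxPerMs are illustrative names, not identifiers from the codebase:

// Hypothetical names, not project code.
// Solution 1: delay the background music by the display lag,
// so the audio lines up with what the (lagged) screen shows.
bgm.scheduleAt(songStartMs + displayLagMs);

// Solution 2: anticipate the notes, i.e. draw each one lower by the
// distance it travels in displayLagMs at its current scroll speed.
note.setY(note.getY() + displayLagMs * scrollPxPerMs);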
I personally opted for the second solution in my branch. Besides the obvious GameOptions entry, the only snippet of code that needs discussion is this:
// File Render.java, line 483
double y = getViewport()
         - velocity_integral(now, te.getTime() + opt.getDisplayDelay(), channel);
which I believe is wrong as it stands, because it changes the interval length of the integral, and we need to calculate the display lag pixel offset for the current moment, not a future one. Let me try to explain. Say we are playing a song at 150 BPM with hi-speed 3.5. Display lag (in ms) converts to a certain y offset in pixels, depending on both the BPM and the hi-speed. Now suppose we need to hit a note near a BPM change (say it drops to 75), near enough that the note is still scrolling at 150 BPM, but the display lag would make it appear on the red bar after the BPM change has occurred. Since the velocity_integral() method knows where the BPM changes are, computing that integral would give you the display lag pixel offset for 75 BPM, not for 150.
The display lag pixel offset should only depend on the current hi-speed and BPM (the channel's BPM if we are taking the speed mod into account), and shouldn't rely on velocity_integral() like it currently does (yes, my shx_display branch had the same problem too).
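Here's a back-of-the-envelope illustration of the discrepancy; pixelsPerMs() and the pixels-per-beat constant are made up for the example, not taken from the codebase:

// Hypothetical helper: scroll speed in px/ms, assuming one beat
// spans pxPerBeat pixels at hi-speed 1.0.
static double pixelsPerMs(double bpm, double hiSpeed, double pxPerBeat) {
    return (bpm / 60000.0) * hiSpeed * pxPerBeat; // (beats/ms) * (px/beat)
}

// With pxPerBeat = 100, hi-speed 3.5 and 30ms of display lag:
//   at 150 BPM: 30ms * 0.875 px/ms  = 26.25 px of offset needed
//   at  75 BPM: 30ms * 0.4375 px/ms = 13.125 px -- what an integral that
//   crosses the BPM drop would give, i.e. half the needed compensation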
Here's how I think display lag compensation could be properly implemented:
// File Render.java, line 483
double y = getViewport() - velocity_integral(now, te.getTime(), channel);

// Per-channel speed factor: with the xR speed mod each lane has its own
// multiplier, so the lag offset must be scaled per lane.
double factor = 1.0;
if (xr_speed)
{
    switch (channel)
    {
        case NOTE_1: factor += speed_xR_values.get(0); break;
        case NOTE_2: factor += speed_xR_values.get(1); break;
        case NOTE_3: factor += speed_xR_values.get(2); break;
        case NOTE_4: factor += speed_xR_values.get(3); break;
        case NOTE_5: factor += speed_xR_values.get(4); break;
        case NOTE_6: factor += speed_xR_values.get(5); break;
        case NOTE_7: factor += speed_xR_values.get(6); break;
    }
}

// Convert the display delay (ms) into a pixel offset at the note's
// current scroll speed, depending only on the current bpm rather than
// on future BPM changes.
double y_offset = opt.getDisplayDelay() * factor * bpm * 0.8 * getViewport()
                / BEATS_PER_MSEC;
e.setPos(e.getX(), y + y_offset);
This way the note's y position is lower than normal: if the display lag is 30ms, the note is drawn lower by the number of pixels it covers in 30ms at its current speed. On a display without lag, the player would see the notes hit the red bar before actually hearing the BGM; but since we do have lag, the notes appear on the red bar in time with the BGM.
The real issue here is determining the proper value for display lag. This could be done with an in-game calibration feature, similar to the StepMania 5 beta one, but that's beyond the scope of this issue.
I hope this clears up what I meant to do with display lag :)
There's also the sound latency problem that still shackles most systems today. Most modern systems operate at around 30-160ms of sound latency, and tearing usually happens when you use a small buffer on a stressed-out CPU, because of the high demand on real-time processing. The latency is caused by the conventional buffer swapping method, where two or more buffers are filled and then swapped into the audio device. With this method you get a time difference between the buffers, and that lag increases when you use a larger, more stable buffer. The latency gets worse when the buffer is not sent directly to the low-level system, but goes through an application layer like DirectX.
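As a rough illustration of the buffering cost (a sketch with assumed, typical values, not measurements from any particular system):

int bufferFrames = 1024;   // frames per buffer
int numBuffers   = 2;      // classic double buffering
int sampleRate   = 44100;  // Hz
// Audio written now becomes audible only after the queued buffers drain:
double latencyMs = 1000.0 * bufferFrames * numBuffers / sampleRate; // ~46.4ms
// ...and that's before any extra delay from an API layer like DirectX.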
What people normally do to fix it is to make the sound play earlier than the game, but that doesn't work here, because sounds are played on human input. As far as I've seen in rhythm games (like StepMania and LR2), players can get their combos in sync with the background music, but the sound from their speakers is always late.
There's no real fix to this problem, because it's rooted in bad hardware and bad code.
Solutions:
1a. On Windows, use ASIO, like LR2 does, so you don't have to go through another application layer.
1b. On Linux, configure ALSA/JACK to run in real time (the user has to do this; see the example below).
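For reference, on many Linux distributions realtime privileges for JACK/ALSA clients are granted through /etc/security/limits.conf (or a file under /etc/security/limits.d/); the group name and values below are common examples and vary by distribution:

@audio - rtprio 95
@audio - memlock unlimited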
So... if we have a ~30ms display lag and a ~30ms audio lag... why would we make any changes? XD
Uh, not really. You are forgetting about the judgement timers. It's true that audio and video would be perfectly in sync if both lags were equal, but you'd still hit the note 30ms late, and the judgement would score you accordingly.
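A minimal sketch of what compensating the judge itself could look like, assuming a known total lag; all names here are hypothetical, not from the codebase:

// If display and audio lag are both ~30ms, the player perceives the chart
// ~30ms late and hits ~30ms late on average, so the judge can subtract
// that offset before scoring:
long totalLagMs = 30;
long delta = (hitTimeMs - noteTimeMs) - totalLagMs; // ~0 for a hit that felt perfect
Judgement judgement = judgeWindow(delta);           // PGREAT/GREAT/... window lookup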
The ASIO solution sounds good, but then the user needs an ASIO-compatible sound card, or has to install ASIO4ALL. I'd rather try to find a way to measure the audio latency: if it doesn't have much jitter, an offset to audio playback could work just fine.
I know, I was just joking xD
My point is, compensating for audio lag would ruin the keysound sync with the actual timed music. Let me explain with a sequence and an example.
Hit key > play keysound > [problem lies here] input-to-audio lag compensation > output / your ears
Let's say you have a lag of 1 second: the background music is compensated to play one second earlier. The player responds to the music 1 gametime second later and triggers a keysound. The compensation would then be applied to the keysound as well, and the player would hear it 1 second after hitting the key.
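Laid out as a timeline, with the 1-second lag from the example:

t = 0.0s   the bgm audio for game time 1.0s is submitted (compensated 1 second early)
t = 1.0s   that bgm becomes audible, in sync with the game; the player reacts, hits a key and triggers a keysound
t = 2.0s   the keysound becomes audible, a full second after the hit, because input cannot be submitted in advance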
There is no other solution: audio has to be processed in real time, or as close to real time as possible. osu! and StepMania never had to face this problem, because their music involves only a single background track and the steps don't have keysounds; for them, a simple audio latency calibration is enough.
From playing a lot, and from experimenting with the latency compensation system, I think this can be marked as solved...