120fps on 60hz Monitor

#1

Anyone feel like writing a defense of maxing out framerates in multiplayer games? I've been reading a lot of arguments recently that there's no point in running a framerate higher than 60 if you have a 60hz monitor. As far as I know the TF2 community is the only one that really cares about these things enough to use low-end graphics settings on a mid-high end rig.

#2

I recently learned that how I thought vsync worked was wrong, and now knowing the way it really does work, I think it would be worthwhile to make sure everyone here understands it.

What is VSync? VSync stands for Vertical Synchronization. The basic idea is that it synchronizes your FPS with your monitor's refresh rate. The purpose is to eliminate something called "tearing". I will describe all these things here.

Every CRT monitor has a refresh rate. It's specified in Hz (Hertz, cycles per second). It is the number of times the monitor updates the display per second. Different monitors support different refresh rates at different resolutions. They range from 60Hz at the low end up to 100Hz and higher. Note that this isn't your FPS as your games report it. If your monitor is set at a specific refresh rate, it always updates the screen at that rate, even if nothing on it is changing. On an LCD, things work differently. Pixels on an LCD stay lit until they are told to change; they don't have to be refreshed. However, because of how VGA (and DVI) works, the LCD must still poll the video card at a certain rate for new frames. This is why LCDs still have a "refresh rate" even though they don't actually have to refresh.

I think everyone here understands FPS. It's how many frames the video card can draw per second. Higher is obviously better. However, during a fast paced game, your FPS rarely stays the same all the time. It moves around as the complexity of the image the video card has to draw changes based on what you are seeing. This is where tearing comes in.

Tearing is a phenomenon that gives a disjointed image. The idea is as if you took a photograph of something, then rotated your view maybe just 1 degree to the left and took a photograph of that, then cut the two pictures in half and taped the top half of one to the bottom half of the other. The images would be similar but there would be a notable difference in the top half from the bottom half. This is what is called tearing on a visual display. It doesn't always have to be cut right in the middle. It can be near the top or the bottom and the separation point can actually move up or down the screen, or seem to jump back and forth between two points.

Why does this happen? Let's take a specific example. Let's say your monitor is set to a refresh rate of 75Hz. You're playing your favorite game and you're getting 100FPS right now. That means that the monitor is updating itself 75 times per second, but the video card is updating the display 100 times per second, that's 33% faster than the monitor. So that means in the time between screen updates, the video card has drawn one frame and a third of another one. That third of the next frame will overwrite the top third of the previous frame and then get drawn on the screen. The video card then finishes the last 2 thirds of that frame, and renders the next 2 thirds of the next frame and then the screen updates again. As you can see this would cause this tearing effect as 2 out of every 3 times the screen updates, either the top third or bottom third is disjointed from the rest of the display. This won't really be noticeable if what is on the screen isn't changing much, but if you're looking around quickly or whatnot this effect will be very apparent.

Now this is where the common misconception comes in. Some people think that the solution to this problem is to simply create an FPS cap equal to the refresh rate. So long as the video card doesn't go faster than 75 FPS, everything is fine, right? Wrong.

Before I explain why, let me talk about double-buffering. Double-buffering is a technique that mitigates the tearing problem somewhat, but not entirely. Basically you have a frame buffer and a back buffer. Whenever the monitor grabs a frame to refresh with, it pulls it from the frame buffer. The video card draws new frames in the back buffer, then copies it to the frame buffer when it's done. However the copy operation still takes time, so if the monitor refreshes in the middle of the copy operation, it will still have a torn image.

VSync solves this problem by creating a rule that says the back buffer can't copy to the frame buffer until right after the monitor refreshes. With a framerate higher than the refresh rate, this is fine. The back buffer is filled with a frame, the system waits, and after the refresh, the back buffer is copied to the frame buffer and a new frame is drawn in the back buffer, effectively capping your framerate at the refresh rate.

That's all well and good, but now let's look at a different example. Let's say you're playing the sequel to your favorite game, which has better graphics. You're at 75Hz refresh rate still, but now you're only getting 50FPS, 33% slower than the refresh rate. That means every time the monitor updates the screen, the video card draws 2/3 of the next frame. So let's track how this works. The monitor just refreshed, and frame 1 is copied into the frame buffer. 2/3 of frame 2 gets drawn in the back buffer, and the monitor refreshes again. It grabs frame 1 from the frame buffer for the first time. Now the video card finishes the last third of frame 2, but it has to wait, because it can't update until right after a refresh. The monitor refreshes, grabbing frame 1 the second time, and frame 2 is put in the frame buffer. The video card draws 2/3 of frame 3 in the back buffer, and a refresh happens, grabbing frame 2 for the first time. The last third of frame 3 is drawn, and again we must wait for the refresh, and when it happens, frame 2 is grabbed for the second time, and frame 3 is copied in. We went through 4 refresh cycles but only 2 frames were drawn. At a refresh rate of 75Hz, that means we'll see 37.5FPS. That's noticeably less than the 50FPS which the video card is capable of. This happens because the video card is forced to waste time after finishing a frame in the back buffer as it can't copy it out and it has nowhere else to draw frames.

Essentially this means that with double-buffered VSync, the framerate can only take a discrete set of values equal to Refresh / N where N is some positive integer. That means if you're talking about a 60Hz refresh rate, the only framerates you can get are 60, 30, 20, 15, 12, 10, etc. You can see the big gap between 60 and 30 there. Any framerate between 60 and 30 your video card would normally put out would get dropped to 30.
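To make the Refresh / N behaviour concrete, here is a minimal Python sketch (a made-up helper, not anything a real driver exposes) that snaps a raw framerate down the way double-buffered VSync does:

def double_buffered_fps(refresh_hz, raw_fps):
    # largest refresh_hz / N that the card can still keep up with
    if raw_fps >= refresh_hz:
        return refresh_hz
    n = 1
    while refresh_hz / n > raw_fps:
        n += 1
    return refresh_hz / n

print(double_buffered_fps(75, 65))   # 37.5 - a 65 FPS card gets pushed down to 37.5
print(double_buffered_fps(60, 59))   # 30.0 - just missing 60 costs you half the frames
print(double_buffered_fps(60, 144))  # 60   - above the refresh rate you simply get capped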

Now maybe you can see why people loathe it. Let's go back to the original example. You're playing your favorite game at 75Hz refresh and 100FPS. You turn VSync on, and the game limits you to 75FPS. No problem, right? Fixed the tearing issue, it looks better. You get to an area that's particularly graphically intensive, an area that would drop your FPS down to about 60 without VSync. Now your card cannot do the 75FPS it was doing before, and since VSync is on, it has to drop to the next value on the list, which is 37.5FPS. So now your game which was running at 75FPS just halved its framerate to 37.5 instantly. Whether or not you find 37.5FPS smooth doesn't change the fact that the framerate just cut in half suddenly, which you would notice. This is what people hate about it.

If you're playing a game that has a framerate that routinely stays above your refresh rate, then VSync will generally be a good thing. However if it's a game that moves above and below it, then VSync can become annoying. Even worse, if the game plays at an FPS that is just below the refresh rate (say you get 65FPS most of the time on a refresh rate of 75Hz), the video card will have to settle for putting out much less FPS than it could (37.5FPS in that instance). This second example is where the perceived drop in performance comes in. It looks like VSync just killed your framerate. It did, technically, but it isn't because it's a graphically intensive operation. It's simply the way it works.

#3

All hope is not lost however. There is a technique called triple-buffering that solves this VSync problem. Let's go back to our 50FPS, 75Hz example. Frame 1 is in the frame buffer, and 2/3 of frame 2 are drawn in the back buffer. The refresh happens and frame 1 is grabbed for the first time. The last third of frame 2 is drawn in the back buffer, and the first third of frame 3 is drawn in the second back buffer (hence the term triple-buffering). The refresh happens, frame 1 is grabbed for the second time, and frame 2 is copied into the frame buffer and the first part of frame 3 into the back buffer. The last 2/3 of frame 3 are drawn in the back buffer, the refresh happens, frame 2 is grabbed for the first time, and frame 3 is copied to the frame buffer. The process starts over. This time we still got 2 frames, but in only 3 refresh cycles. That's 2/3 of the refresh rate, which is 50FPS, exactly what we would have gotten without it. Triple-buffering essentially gives the video card someplace to keep doing work while it waits to transfer the back buffer to the frame buffer, so it doesn't have to waste time. Unfortunately, triple-buffering isn't available in every game, and in fact it isn't too common. It also can cost a little performance to utilize, as it requires extra VRAM for the buffers, and time spent copying all of them around. However, triple-buffered VSync really is the key to the best experience as you eliminate tearing without the downsides of normal VSync (unless you consider the fact that your FPS is capped a downside... which is silly because you can't see an FPS higher than your refresh anyway).
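If you want to convince yourself of the 37.5 vs 50 FPS numbers, here is a rough Python simulation (a toy model with made-up scheduling, not how any actual driver works) of VSync with one back buffer versus two:

def simulate(back_buffers, render_fps=50, refresh_hz=75, sim_time=10.0):
    # toy model: the GPU renders into free back buffers; finished frames can only
    # be flipped to the screen at a refresh, and the newest finished frame wins
    render, period = 1.0 / render_fps, 1.0 / refresh_hz
    gpu_time, ready, displayed = 0.0, 0, 0
    next_refresh = period
    while next_refresh < sim_time:
        while ready < back_buffers and gpu_time + render <= next_refresh:
            gpu_time += render     # render while a back buffer is free
            ready += 1
        if ready == back_buffers:  # every buffer full: the GPU stalls until the flip
            gpu_time = max(gpu_time, next_refresh)
        if ready > 0:              # flip the newest finished frame at the refresh
            displayed += 1
            ready = 0
        next_refresh += period
    return displayed / sim_time

print(simulate(back_buffers=1))  # ~37.5 (double buffering: a 50 FPS card stuck at 37.5)
print(simulate(back_buffers=2))  # ~50.0 (triple buffering: the full 50 FPS again)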

I hope this was informative, and will help people understand the intricacies of VSync (and hopefully curb the "VSync, yes or no?" debates!). Generally, if triple buffering isn't available, you have to decide whether the discrete framerate limitations of VSync and the issues they can cause are worth the visual improvement of eliminating tearing. It's a personal preference, and it's entirely up to you.


It is impossible to get more FPS than your refresh rate on your monitor.

This is not true. If you use VSync, you will lock your FPS to your refresh rate. The thing is, if you have it off, your GPU can render as many frames as it can before your monitor refreshes. Now, here is the thing. Let's say the refresh rate on your monitor is 60Hz, and you are getting 120FPS in your game. So, with VSync off, your GPU will render 2 frames before your monitor updates, so your monitor will update every other frame basically (frame 1, 3, 5, 7, etc.). So in reality it is actually skipping frames. Now, tearing will only occur if your GPU is rendering frames out of proportion to your monitor's refreshes. Let's say your refresh rate is 60Hz and your FPS is 100. In this case, your GPU will render 1 frame and 2/3 of another frame, and that will cause a tear. I am not 100% sure on this one, but it would make sense that the percentage of the next frame that gets rendered determines where the tear will actually be on your screen. For example if you render 1 and 2/3 frames, the tear would be roughly 2/3 down your screen.

Should I use VSync or not?
Well, I would say if you have the hardware to handle it, you may as well. What I mean is if you have a high end CRT with a 100Hz+ refresh, then it would probably be better to use VSync. But even if you have a 100Hz refresh rate I would not use VSync if your actual FPS is over 100, so if you have VSync off and your FPS is 200 or more, I would just leave it off. Also, if you have a refresh rate of let's say 100Hz, but your video card can only do 90 FPS, your GPU can only do 9/10 of every frame, so you would only render 1 frame every 2 refreshes, and that would cut your frame rate down to 50. You can improve that by enabling Triple Buffering, but it will eat more of your resources.

Plagiarized from http://www.overclock.net/t/371648/info-explanation-of-fps-vs-refresh-rate

There's also this paper from BBC R&D about 300fps
http://downloads.bbc.co.uk/rd/pubs/whp/whp-pdf-files/WHP169.pdf

#4

I has 120 fps with my 60hz monitor. But am I really getting what it tells me???????? I can even put it at 300 fps if I wanted.

#5

Benefits of Higher FPS
better/more accurate mouse and keyboard inputs
several games do a lot of calculations that are more accurate at higher frame rates
for low GtG monitors (VA and TN panels), screen tearing isn't that bad and usually makes the image look more fluid

FPS Caps
In TF2, FPS caps add a lot of input lag if you're hitting the cap 100% of the time. The input lag mostly affects initial mouse movement from a static position and mouse direction changes: a 120FPS cap would add about 8.3ms of input lag to every "new" movement, while adding 0 input lag to consistent movement.
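The 8.3ms figure is just one frame interval at the cap; a quick sanity check in Python (worst case only, hypothetical helper):

def cap_penalty_ms(fps_cap):
    # worst case: a brand-new input has to wait one full frame interval at the cap
    return 1000.0 / fps_cap

print(cap_penalty_ms(120))  # ~8.3 ms
print(cap_penalty_ms(300))  # ~3.3 ms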

#6

RE: Vsync, the simple way to put it is that if your screen looks like the typical "screen tearing" image results on Google you should turn vsync on. This is good information but it doesn't answer my main question so I'll put it differently:

Let's say you're a competitive TF2 player on a 60hz monitor who regularly gets 100-120 fps according to the game's fps charts. Valve releases a graphical update for TF2 that makes everything look really cool but provides no gameplay advantage/disadvantage. (You can still see the roamer's shadow through the wall.) With the new options turned on, your framerate drops to 60-65 fps. It's still above your monitor's refresh rate, so will it make a gameplay difference? Technically you're getting only half of the frames you were getting before. Should you opt out of the new graphics update?

#7

Depends on the game. In single player, if you consistently get fps > refresh rate it's an easy choice: if no tearing matters more to you than less input lag, use vsync; if less input lag matters more than no tearing, run uncapped with no vsync.

In multiplayer however you have to keep the networking in mind. Some games do interesting stuff at certain framerates. Quake comes to mind. The physics engine used to run at a 125 tickrate and iirc even Quake Live servers still run at 125/250 tickrate, so anything that didn't sync up with the 8ms/4ms tick interval would cause interesting stuff to happen, e.g. firerates and jump height changing. Same thing for TF2: 60Hz/16.67ms and a 66.67 tickrate/15ms obviously don't gel well.

tl;dr
TF2 is smoother at higher fps.

#8

I have a computer that can run TF2 consistently at 144 fps, so I have vertical sync on. It also supports triple buffering, and I can play at the twice refresh rate +1 figure that people tend to recommend 100% of the time. Which should I go for to minimize input lag?

#9

Quoting myself:

If you have 60fps on a 60hz monitor, your base latency is 0 to 1/60th of a second (0~16ms). If you have 120fps on a 60hz monitor, your base latency is 0 to 1/90th (0~11ms) of a second.

The actual rate of displayed frames is a hard limit of 1/60th of a second (16ms), and the game's rendering rate is another up to 1/fps on top of that, since they don't sync with each other (and when you do sync them with each other/vsync it raises your latency floor)

Exactly 60fps on a gsync/freesync monitor would be better than 120fps on a normal 60hz, unless said monitor has higher processing latency or is set up incorrectly.

Anyway, you have all these interlocking rates at which things are sent off to other systems in your computer. The general flow is input -> game -> render -> monitor. The render -> monitor stage is infamous for all sorts of crazy convolution.

When you have a monitor that runs at a set refresh rate, there's no way to render the same number of frames and make sure you still have the lowest possible latency. The reason is that, abstractly speaking, the monitor has to look at the last frame that was rendered, and if it's rendered at the same rate, it's going to generally be half a frame behind. If it's rendered at a higher rate, that window decreases.

If you run at a slightly higher framerate, it makes the latency window increase up to the length of a game frame, go back to 0, and decrease again. It looks like this:

http://i.imgur.com/RebCGsf.png

(Keep in mind that the game update has its own latency, since it needs a consistent input state for its entire duration, it has to sample input at its beginning)

If we lived in a perfect world, the game engine could tell how far behind the monitor it is, and adjust. But we don't, so increasing the framerate dramatically or using a gsync/freesync monitor is what we've got.

#10

to minimize input lag you should run w/ vsync off as most games do not have proper triple buffering; instead they use a render ahead method which actually creates even more input lag:

UPDATE: There has been a lot of discussion in the comments of the differences between the page flipping method we are discussing in this article and implementations of a render ahead queue. In render ahead, frames cannot be dropped. This means that when the queue is full, what is displayed can have a lot more lag. Microsoft doesn't implement triple buffering in DirectX, they implement render ahead (from 0 to 8 frames with 3 being the default).

The major difference in the technique we've described here is the ability to drop frames when they are outdated. Render ahead forces older frames to be displayed. Queues can help smoothness and stuttering as a few really quick frames followed by a slow frame end up being evened out and spread over more frames. But the price you pay is in lag (the more frames in the queue, the longer it takes to empty the queue and the older the frames are that are displayed).

From the update at the bottom of this article: http://www.anandtech.com/print/2794/
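As a rough illustration of why the render-ahead queue is worse (back-of-the-envelope math based on the quote above, not measured DirectX behaviour): every frame stuck in the queue adds roughly one frame interval of age to whatever finally reaches the screen.

def queue_lag_ms(fps, queued_frames):
    # each queued frame that cannot be dropped adds about one frame interval of staleness
    return queued_frames * 1000.0 / fps

print(queue_lag_ms(60, 3))   # ~50 ms with the 3-frame default at 60 FPS
print(queue_lag_ms(300, 3))  # ~10 ms at 300 FPS; page flipping would drop the stale frames instead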

#11

"classical" triple buffering also adds input lag but only when there would otherwise be noticable tearing.

Thanks for noting the important distinction that DX triple buffering has, however.

"classical" triple buffering also adds input lag but only when there would otherwise be noticable tearing.

Thanks for noting the important distinction that DX triple buffering has, however.
12
#12
-7 Frags +

lol i get 200-300 fps at all times and use a 60hz monitor

#13

to come to the conclusion that more than 60fps on a 60hz monitor is not beneficial, one has to be that special combination of purely theoretical and yet unintelligent.

the lag is additional. the monitor and the game frames are not synced up. think of them as independent. the game renders the frames and then the monitor samples them. the more frames the game renders per second, the more up to date the most recent frame is. when the monitor takes a frame, it will get a more recent one. resulting in less lag.

in terms of numbers at 60hz we're looking at 16ms lag from the monitor.
there is an additional 0-16ms lag (average 8ms) with a framerate of 60fps. with 120fps it would be 0-8ms (average 4ms). therefore you will get an average of 4ms less latency with 120fps vs 60fps. it is also more consistent with 120fps varying within 8ms while 60fps varies within 16ms.
this range depends on when the monitor happens to take its sample. the sample time will fluctuate as the monitor and game cannot be expected to stay in an exact sync of any kind.

so 60hz/60fps = 16ms to 32ms lag (~24ms)
60hz/120fps = 16 to 24ms lag (~20ms)
60hz/240fps = 16 to 20ms lag (~18ms)
60hz/480fps = 16 to 18ms lag (~17ms)
120hz/60fps = 8 to 24ms lag (~16ms)
120hz/120fps = 8 to 16ms lag (~12ms)
120hz/240fps = 8 to 12ms lag (~10ms)
120hz/480fps = 8 to 10ms lag (~9ms)
144hz/480fps = 7 to 9ms lag (~8ms)

as you can see the higher the fps the better, with diminishing returns.
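that table falls out of a couple of lines of Python (same model as above: one refresh interval of scanout plus zero to one game frame of age; the printed values match the table up to the rounding used above):

def lag_ms(refresh_hz, fps):
    monitor = 1000.0 / refresh_hz        # fixed scanout delay from the monitor
    frame = 1000.0 / fps                 # the sampled frame is 0..1 game frame old
    return monitor, monitor + frame, monitor + frame / 2   # min, max, average

for hz, fps in [(60, 60), (60, 120), (60, 480), (120, 120), (144, 480)]:
    lo, hi, avg = lag_ms(hz, fps)
    print(f"{hz}hz/{fps}fps = {lo:.1f} to {hi:.1f}ms lag (~{avg:.1f}ms)")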

#14

It's also important to use the best monitor connection you have. HDMI 1.0 is supposed to saturate at 24-bit 1080p60hz (sometimes I read 1600x1200x60hz which is slightly more), which means that it's constantly transmitting frames, so transmitting a single frame will take 1/60th of a second, increasing "how far in the past" the frame already is by that much. I'm not sure exactly how each protocol works, but that should be a hard limitation. Newer versions of HDMI and DisplayPort provide much higher bandwidth, which should let a single frame be transmitted during less time, and if the protocols are any good at all they should already do that.

If you can't then running at 800x600 fullscreen should actually decrease lag too.

#15

#14
Correction: It's 1920x1200p 60Hz for HDMI 1.0-1.2. 1600x1200p is less than 1920x1080p. 1.3 is 2560x1600p 60Hz. 1.4 is the same bandwidth as 1.3 but adds 3D (1920x1080p 24Hz) and 4K (3840x2160p 30Hz, 4096x2160p 24Hz). 2.0 doubles the bandwidth again (4K 60Hz, 4K 3D 30/24Hz, 1080p 3D 60Hz).

Or in terms of transfer time for 1920x1080p:
1.0-1.2: ~16ms
1.3-1.4: ~8ms
2.0: ~4ms
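A back-of-the-envelope sketch of where those numbers come from, assuming a 1080p60 frame just about fills an HDMI 1.0-1.2 link and each later generation roughly doubles the usable bandwidth (assumed ratios, not measured figures):

base_ms = 1000.0 / 60   # ~16.7 ms: one 1080p frame fills a whole 60 Hz interval on HDMI 1.0-1.2
for version, bandwidth_factor in [("1.0-1.2", 1), ("1.3-1.4", 2), ("2.0", 4)]:
    print(f"HDMI {version}: ~{base_ms / bandwidth_factor:.1f} ms per 1080p frame")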

#16

FWIW, most of the arguments in this thread are intimately tied to how idTech/GoldSrc/Source are designed, which might explain your impression that:

Bentomat: As far as I know the TF2 community is the only one that really cares about these things enough to use low-end graphics settings on a mid-high end rig.
#17

Based Setsul

>FWIW, most of the arguments in this thread are intimately tied to how idTech/GoldSrc/Source are designed, which might explain your impression that:

No, they're tied to how PC hardware sync in general works. Universally. You'll only find "good" video/audio timing on old consoles such as the NES and SNES, where games were intimately designed around hardware timing.

#18

I forgot to mention, transfer time is irrelevant, other than limiting the refresh rate. Any low input lag monitor should be using progressive scan as in the first horizontal line is transmitted and immediately displayed (well the pixels still take a few milliseconds to change, but they start changing at that time). Right after that the next line will finish transmission and get displayed and so on. And after the bottom line starts refreshing the whole thing starts over at the top. No delay. So while the first line is only delayed by a bit of electronics (1ms-30ms, that's what they write on the box as response time) plus the time it takes for the complete transmission of the line (15µs, yes microseconds, less than 1/1080 of the 16.67ms (60Hz refresh rate) it takes to transmit all 1080 lines) the last line will be delayed by the full 16.67ms plus the response time. That's simply how monitors work.

Or to be precise that's how it ideally works. For example a 30Hz interlaced video signal would get delayed by at least 33.33ms(first line)/50ms(last line) + x if not 66.67ms/83.33ms + x because 2 frames have to be buffered to deinterlace the signal (1 complete 30Hz frame transmission + response time + a bit of calculation time if it starts to calculate the new frame once the first line of the second frame starts transmitting, otherwise 2 full 30Hz frames delay).
Another good example is scaling. Yes, lower, non-native resolutions actually add latency, not reduce it. Best case: the scaler starts to calculate once a couple of lines have been transmitted, so you just get the few milliseconds (or even less if it's a good hardware scaler) it takes the scaler to calculate the new lines. The screen can't refresh any faster so the last line is still delayed by the 16.67ms it takes for the refresh cycle to get there plus the added latency of the scaler. Average case: instead of having to wait for new lines every couple of lines, the scaler buffers a full frame and then starts to work. So you get the scaler latency and the roughly 4ms it takes to transmit one 800x600 frame on top of the usual stuff. Worst case: same thing as average except the frames get transmitted with just enough bandwidth to get 60Hz, so every frame takes 16.67ms + scaler + the usual stuff. So you went from response time for the first line and response time + 16.67ms for the last line to response + scaler + 16.67ms for the first and response + scaler + 33.33ms for the last. Congratulations, you just more than doubled your input lag.
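To put rough numbers on that in Python (the 1ms scaler latency and 5ms response time below are illustrative values, not measurements of any particular monitor):

refresh_ms = 1000.0 / 60      # 16.67 ms per refresh at 60 Hz
lines = 1080
print(f"one line: {refresh_ms / lines * 1000:.1f} µs")   # ~15.4 µs

response_ms = 5.0             # hypothetical panel response time
scaler_ms = 1.0               # hypothetical scaler latency
# native progressive scan: first line almost immediately, last line a full refresh later
print(f"native:  first {response_ms:.1f} ms, last {response_ms + refresh_ms:.1f} ms")
# worst-case buffered scaler from above: a whole extra frame of delay on top
print(f"scaled (worst case): first {response_ms + scaler_ms + refresh_ms:.1f} ms, "
      f"last {response_ms + scaler_ms + 2 * refresh_ms:.1f} ms")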

tl;dr
Don't use non-native resolutions on LCDs.
It only reduces input lag on CRTs and only if you can increase the refresh rate because of the lower resolution.

Bonus: The GPU can't just randomly start to send frames to the monitor, it needs to be synced with the monitor's refreshes. Games don't give a fuck about the monitors so the only way the GPU can solve the problem is by waiting until the monitor requests a frame. And if that happens 0.01ms before a new frame starts drawing then so be it. In that case the monitor will get a 16.66ms old frame. So another completely random 0-16.66ms lag (60fps). It's different every time the monitor is turned on or is plugged in. Even different outputs/monitors on the same GPU will have different delays. That means a normal 60fps cap is actually worse than VSYNC if you can consistently get 60fps. VSYNC eliminates that random delay. Triple buffered VSYNC also eliminates the lag from the fps cap. So whenever you don't have to cater to wonky engines with the fps cap and if it's available, provided you have sufficient VRAM, use triple buffered VSYNC.

#19

"transfer time is irrelevant"
[Proceeds to explain the cases where it's relevant]
Different LCDs are capable of different things. If you send it an extremely small frame (such as 800x600 16kcolors) then it doesn't need to wait for the entire panel's worth of data to be transmitted before it has data with which it can write to the bottom row. Then the added latency is up to the specific monitor. If it has a lightning fast rescaler and can blot every scanline together fast enough, you've saved time. I have a monitor for which this is the case. Running at 120fps 60hz 800x600x15bpp(16bpp) is noticeably more responsive than 120fps 60hz 1680x1050x24bpp.

tl;dr: It depends on your monitor. Transfer time is still a hard limitation in addition (actually max() in the case setsul described) to whatever the panel itself is capable of.

Don't use triple buffering in DX games. DX doesn't support classical (low-latency) triple buffering. (post-DX9 might, but I wouldn't know.) http://www.emutalk.net/threads/14075-DirectX-multiple-back-buffer-question >DX can support more than 2 buffers, but the order of buffer swapping is fixed (always flipping in a circular matter), not flexible. I am wondering if someone knows a secret way to do so.

Addendum:

http://i.imgur.com/jjE3q2K.png

"transfer time is irrelevant"
[Proceeds to explain the cases where it's relevant]
Different LCDs are capable of different things. If you send it an extremely small frame (such as 800x600 16kcolors) then it doesn't need to wait for the entire panel worth of data to be transmitted before it has data with which it can write to the bottom row. Then the added latency is up to to the specific monitor. If it has a lightning fast rescaler and can blot every scanline together fast enough, you've saved time. I have a monitor for which this is the case. Running at 120fps 60hz 800x600x15bpp(16bpp) is [i]noticeably[/i] more responsive than 120fpz 60hz 1680x1050x24bpp.

tl;dr: It depends on your monitor. Transfer time is still a hard limitation in addition (actually max() in the case setsul described) to whatever the panel itself is capable of.

Don't use triple buffering in DX games. DX doesn't support classical (low-latency) triple buffering. (post-DX9 might, but I wouldn't know.) http://www.emutalk.net/threads/14075-DirectX-multiple-back-buffer-question >DX can support more than 2 buffers, but the order of buffer swapping is fixed (always flipping in a circular matter), not flexible. I am wondering if someone knows a secret way to do so.

Addemendum:
[url=http://i.imgur.com/jjE3q2K.png][img]http://i.imgur.com/jjE3q2K.png[/img][/url]
20
#20
2 Frags +

Bentomat: Anyone feel like writing a defense of maxing out framerates in multiplayer games? I've been reading a lot of arguments recently that there's no point in running a framerate higher than 60 if you have a 60hz monitor. As far as I know the TF2 community is the only one that really cares about these things enough to use low-end graphics settings on a mid-high end rig.

Many lengthy explanations here that are TLDR.

Yes running a 60Hz monitor you can only see 60 FPS, however if you limit your game FPS to 60, your mouse movement and accuracy will suffer. In my opinion I prefer leaving the game at default 299 cap. The more stable your FPS, the more stable your mouse movement in TF2 will be. If you fluctuate from say 100 FPS at one moment to 300 FPS the next, your tracking and aim will suffer.

Oh and those talking about VSYNC in TF2 or any Source engine game: forget about it. Have you actually tried it? The increased input lag is insane. I tried VSYNC one day, and it looked nice but I literally could no longer rocket jump properly. It was that bad.

#21

Transfer time is irrelevant unless your monitor has horrible input lag, in which case you could "improve" your situation and go from horrible input lag to less but still horrible input lag plus horrible picture quality.
Progressive scan -> doesn't matter, limited by refresh rate
Buffered -> two options:
1. bad monitor in terms of latency (most likely since it's using a buffer so latency clearly isn't a concern) -> transmit at default speed, transfer time is set by refresh rate (identical to progressive scan, except you got one frametime of lag added)
2. good monitor in terms of latency that uses the maximum transfer rate but for some reason is using a buffer, which totally makes no sense if they care about latency so those monitors are rare/nonexistent -> normal latency + transfer time, still worse than progressive scan.

Progressive -> transfer time is irrelevant.
1. -> transfer time is irrelevant
2. -> transfer time in theory relevant. In practice only very bad monitors do that so you'll probably get another 30ms response time on top of that so you're already fucked in terms of input lag.

-> You want progressive scan, and in that case transfer time is set by the refresh rate; no cable or transfer protocol is going to change that, the best it could do is reduce the overhead time. In addition to that it'll only work if the monitor's inputs are capable of higher speeds than what the monitor actually needs.

I'll show you with my awesome MS Paint skills a bit further down.

DX9 does support triple buffering. It can't drop frames; that's the problem. And it's only a problem when the fps are much higher than the refresh rate, so the added input lag is a fraction of the refresh time. The pre-rendered frames queue adds more input lag; reducing its length helps more.
Also let's keep in mind that DX9 is over 12 years old. That's not exactly state of the art anymore.
http://www.gamedev.net/topic/649174-why-directx-doesnt-implement-triple-buffering/
http://gamedev.stackexchange.com/questions/58481/does-directx-implement-triple-buffering

wareya: If you send it an extremely small frame (such as 800x600 16kcolors) then it doesn't need to wait for the entire panel worth of data to be transmitted before it has data with which it can write to the bottom row.

1. 15bpp is 32k and 16bpp is 64k.
2. Your eyes are pretty much the worst measurement tool except for random guessing.
3. You've mixed up two cases.
If the scaler waits for a complete frame to be buffered transfer time matters, but it can't start immediately.
If the scaler starts immediately transfer time doesn't matter.

The bottom row is the last one to be transmitted. Obviously it has to be transmitted before it can be displayed. By definition, if the last row has been transmitted, the whole frame has been transmitted. Also, this is exactly what I talked about in the best case: the scaler can start working with the first transmitted line, so only the scaler latency is added. Given sufficient bandwidth the scaler could finish the frame only slightly after the transmission has been completed. You're right in that respect. However, this isn't the limitation. If the display is running at its maximum refresh rate, going from the top row to the bottom row during a refresh takes almost exactly 1/[refresh rate] seconds. Example with progressive scan (ignoring response time): 1ms scaler latency, 16.67ms transfer time at 1920x1080, 3.8ms transfer time at 800x600, 16.67ms refresh time.
No scaler: frame transmission starts at 0ms, refresh starts at 0ms. Transmission ends at 16.67ms, refresh ends at 16.67ms. Next refresh from 16.67ms to 33.33ms and so on.
Scaler: Frame transmission starts at 0ms, scaler starts at 0ms. Scaler finishes first line at 1ms, refresh starts at 1ms. Transmission ends at 3.8ms. Scaler finishes last line at 4.8ms. Next transmission starts at 16.67ms, scaler starts at 16.67ms. Refresh ends at 17.67ms, scaler finishes first line at 17.67ms. New refresh starts at 17.67ms.
See the problem? Refresh time is the limit. All it did was add the scaler latency.
Now an example with a buffered scaler where transfer time matters (T=transmission, S=scaler, R=refresh):
T starts at 0ms. T ends at 3.8ms. S starts at 3.8ms. S finishes first line at 4.8ms, R starts at 4.8ms. S finishes last line somewhere between 4.8ms and 8.6ms. New T starts at 16.67ms. T ends at 20.47ms. S starts at 20.47ms. S finishes first line at 21.47ms, R ends at 21.47ms, new R starts at 21.47ms.
Now it's scaler latency + transfer time. This is only an improvement if scaler latency + small-res transfer time is less than the large-res transfer time *and* if the monitor buffers a frame no matter what, instead of using progressive scan for the native resolution. Is it a TV?

Bottom line is that a scaler can't save time. It can't speed up the refresh itself. Even if the refresh only takes e.g. 13.33ms (75Hz) physically on the screen, so that transfer time is the limiting factor, lowering the resolution only improves things marginally. Yes, the average display lag is reduced by 1.167ms, but there will still only be one frame transmitted every 16.67ms. If the next frame has to wait 5ms for its transmission, that won't change at all. What you want to do in that case is overclock the monitor, and that's when transfer time actually matters: if the cable/output/input/protocol/whatever don't have sufficient bandwidth to keep the transfer time below the refresh time, you have to drop the resolution. Because even with the 1ms scaler latency, for that 5ms-delayed frame you gain another 3.33ms (OC to 75Hz), for a total of 4.5ms reduction.
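
(If the prose timelines above are hard to follow, here's a rough Python sketch of the three cases. The numbers are the same illustrative ones from the example: 60Hz refresh, 16.67ms native transfer, 3.8ms 800x600 transfer, and an assumed 1ms scaler latency. It's a toy model, not a claim about any particular monitor.)

[code]
# Toy replay of the three timelines: native progressive scan, low-res input with
# a line-by-line (progressive) scaler, low-res input with a buffered scaler.
REFRESH = 1000 / 60                                  # ms per refresh, ~16.67
T_NATIVE = REFRESH                                   # 1080p transfer fills the whole refresh
T_LOWRES = T_NATIVE * (800 * 600) / (1920 * 1080)    # ~3.8 ms
SCALER = 1.0                                         # assumed scaler latency in ms

def no_scaler(frame):
    """Native res: the panel refresh is locked to the transmission."""
    first = frame * REFRESH
    return first, first + REFRESH                    # first/last line on screen

def progressive_scaler(frame):
    """Scaler starts on the first transmitted line; only its latency is added."""
    first = frame * REFRESH + SCALER
    return first, first + REFRESH                    # the panel still needs a full refresh

def buffered_scaler(frame):
    """Scaler waits for the whole low-res frame; transfer time now matters."""
    first = frame * REFRESH + T_LOWRES + SCALER
    return first, first + REFRESH

for name, case in (("no scaler", no_scaler),
                   ("progressive scaler", progressive_scaler),
                   ("buffered scaler", buffered_scaler)):
    f0, l0 = case(0)
    print(f"{name:>18}: first line at {f0:5.2f} ms, last line at {l0:5.2f} ms")
[/code]

Either way a new frame still only arrives every 16.67ms; all the scaler changes is the constant offset in front of it.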

[talks about triple buffered vsync]
[posts picture about normal vsync]

wareya: Addemendum

Did you mean addendum?

http://i.imgur.com/K8mTUek.png

Black: Sampled input with constant vsync
Light Blue: vsync delay + rendering
Dark Green: waiting for display + actual refresh
Yellow: Random input delay (earlier inputs more delay, later less)
Red: Constant delay through rendering + waiting.
Light Green: Random refresh delay (top line appears first, no delay, bottom line last, full delay)
Brown: Same frame displayed twice.
Dark Blue: Rendering

You see the first missed refresh? That one is unavoidable. You'll have to live with one added refresh time of input lag, even with uncapped fps.
The problem is the second one. Because the next frame doesn't start rendering until the refresh starts, you get the input spread over two full refreshes instead of around one, and all of that data is delayed by another refresh if the frame doesn't render in time. And those refreshes worth of lag keep going for as long as the fps are below the refresh rate. Triple buffering avoids that because the next frame already starts rendering while the previous one is waiting (pink) because it missed the refresh. That reduces the added input lag to the normal one refresh time that you get when the fps are below the refresh rate.
And that holds true for DX triple buffering as well. The only difference occurs when the fps spike to more than twice the refresh rate. DX won't render a third frame until the last refresh finishes, and that adds some input lag. However, it might actually feel smoother, because with DX the average input lag, while higher, stays pretty much the same, whereas with normal triple buffering it drops on the next frame, then spikes up again, then drops again and so on. Well, in the picture it doesn't drop because the fps drop pretty harshly; instead it increases less. With more stable fps it would drop, but you get the idea: the variation increases.
You can both get DX to behave more normally (reduce input lag) and get normal triple buffering to behave more like DX (smooth input lag) by using an fps cap. In fact, thanks to statistics, DX can actually have less input lag than normal triple buffering.
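
(Here's a tiny Python model of that difference, in case the picture isn't enough. It's my own simplification: input is assumed to be sampled when a frame starts rendering, the render time is a made-up constant 20ms, and the DX queue/pre-rendered-frames details aren't modelled.)

[code]
# Toy model of the two cases in the picture: plain double-buffered vsync
# (the next frame only starts rendering at a refresh) vs triple-buffered vsync
# (frames render back to back). Input is assumed to be sampled at render start.
# Frame dropping isn't modelled, so keep the render time above one refresh period.
import math

REFRESH = 1000 / 60                 # ms
RENDER = 20.0                       # constant render time -> 50 fps, below 60Hz
FRAMES = 6

def next_refresh(t):
    return math.ceil(t / REFRESH) * REFRESH

def double_buffered():
    lags, t = [], 0.0
    for _ in range(FRAMES):
        start = next_refresh(t)                   # rendering waits for the vsync point
        shown = next_refresh(start + RENDER)      # shown on the refresh after it's done
        lags.append(shown - start)
        t = shown
    return lags

def triple_buffered():
    lags = []
    for i in range(FRAMES):
        start = i * RENDER                        # rendering starts as soon as the GPU is free
        shown = next_refresh(start + RENDER)
        lags.append(shown - start)
    return lags

d, t3 = double_buffered(), triple_buffered()
print("double buffered lags:", [round(x, 1) for x in d])    # two refreshes every frame
print("triple buffered lags:", [round(x, 1) for x in t3])   # render time + up to one refresh
print("averages:", round(sum(d) / len(d), 1), "vs", round(sum(t3) / len(t3), 1))
[/code]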

Continued in next post.

22
#22
0 Frags +

E.g. cap at 2 times the refresh rate, fps just below 3 times the refresh rate, frame starts rendering in sync with the refresh. Normal triple buffering renders 2 frames before the next refresh, the 2nd starting a bit less than 2/3 of a refresh time before the next one. The third frame just barely doesn't get rendered in time.
DX renders 2 frames as well, but because of the cap the second one only starts rendering 1/2 a refresh after the previous one / before the next refresh. Almost 1/6 of a refresh time less input lag, PRAISE SCIENCE!
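
(Back-of-the-envelope version of those fractions, with everything measured in refresh periods. This is just the scenario described above, nothing engine-specific.)

[code]
# Cap at 2x refresh, fps just below 3x refresh, frame 1 starts at a refresh.
# Units: one refresh period = 1.0. "Input lag" here = time from the start of the
# second frame's render to the next refresh, where that frame is displayed.
refresh = 1.0
render = 1.0 / 3 + 0.001     # "just below 3x the refresh rate"
cap_interval = 0.5           # cap at 2x the refresh rate

normal_second_start = render          # normal: starts as soon as frame 1 is done
capped_second_start = cap_interval    # DX + cap: has to wait out the cap interval

print("normal  :", round(refresh - normal_second_start, 3))    # ~0.666 refreshes
print("DX + cap:", round(refresh - capped_second_start, 3))    # 0.5 refreshes
print("saved   :", round(capped_second_start - normal_second_start, 3))  # ~1/6 refresh
[/code]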

#20

Setsul: ... So whenever you don't have to cater to wonky engines with the fps cap ...

TF2 is safely in the wonky engine category, sorry if I didn't make that clear.

23
#23
0 Frags +

tf2 servers are 66 tick so there's that too

24
#24
0 Frags +

>Progressive scan -> doesn't matter, limited by refresh rate
Limited by whichever is slower between your panel and the connection. Not some magic number that's the same between monitors.

>2. good monitor in terms of latency that uses the maximum transfer rate but for some reason is using a buffer which totally makes no sense if they care about latency so those monitors are rare/nonexistant -> normal latency + transfer time, still worse than progressive scan.
Let's say I have a monitor where it takes 1/90th of a second for it to blit the whole panel once it has its buffer, and it takes a millisecond to resize the buffer in its internal RAM. That's ~12 ms to resize and display after getting the image. If it normally takes 1/60th of a second for it to get a 1080p24 image, then getting a 800x600x15(16) image should take just over 1/6th of that time; we're already only up to 15ms to send, resize, and display the low resolution image.

For people who have *very good monitors* but do *not* have the right cables to go with them, this is a perfectly legitimate option. These are people that exist.
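
(The arithmetic behind that, for anyone checking; the monitor itself is hypothetical and the numbers are just the ones quoted above.)

[code]
# Rough arithmetic for the ~15ms figure: hypothetical monitor that blits its
# buffer to the panel in 1/90s and rescales the buffer in about 1ms.
blit = 1000 / 90                      # ms to drive the whole panel from its buffer
resize = 1.0                          # ms to rescale the buffer in internal RAM
native_transfer = 1000 / 60           # ms to receive a 1920x1080x24 frame
lowres_transfer = native_transfer * (800 * 600 * 16) / (1920 * 1080 * 24)  # roughly a sixth

print(f"resize + blit        : {resize + blit:.1f} ms")                    # ~12.1 ms
print(f"low-res transfer     : {lowres_transfer:.1f} ms")                  # ~2.6 ms
print(f"send + resize + blit : {lowres_transfer + resize + blit:.1f} ms")  # ~14.7 ms
[/code]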

>DX9 does support triple buffering.

It has three buffers, but it's a queue. Hence "the order of buffer swapping is fixed". I don't see what's wrong here. Have fun with your two frames of forced input lag :)

API edge cases that drivers don't implement properly beware! Hail Vulkan!

>3. You've mixed up two cases.
Nearly every scaler is the full-buffer case. I'm well aware of the others, but the fact that slow connections can make even fully buffered low-resolution images make sense is already the "worst" case.

>This is only an improvement if scaler latency + small res transfer time is less than large res transfer time and if the monitor buffers a frame no matter what instead of using progressive scan for the native resolution.

Again, these situations exist. It's not unicorns we're talking about.

>Is it a TV?

God, no. Throw all latency worries out the window.

>Did you mean addendum?

Everyone in my house says addemendum, so that's the word to me.

>[posts picture about normal vsync]

You would notice that's actually triple buffered vsync if you understood the dropped frame. Normally, sending and rendering the latest frame can't be done at the same time. That's one of the reasons that triple buffering exists.

(The fact that there are triple buffering implementations that don't involve timing sync on the gaming engine is off-topic. I was just showing the best case for DX vsync.)

>Re:image

You have no idea what you just did to that image. Come back after you write an emulator for 3d gaming hardware.

>Normal vsync line is completely wrong
>triple buffer missing refreshes (retracted: misinterpreted this, move on)
>"You'll have to live with one added refresh time of input lag, even with uncapped fps."
while completely ignoring the fact that DX vsync starts rendering the next frame as early as possible instead of as late as possible, regardless of how much performance the game could actually theoretically squeeze out of it
>DX triple + cap @ 2x refresh
u wot m8? Why the fuck would you make DX triple-buffer without vsync, and if that's not what you're doing, how the hell do you make DX vsync to "2xrefresh"? The only thing I can even think of is playing the game in a window without an FPS cap with Aero enabled. That's two extra layers of indirection where literally anything could go wrong latency-wise.

25
#25
0 Frags +

transfer time < refresh time, by definition, otherwise you wouldn't be able to keep the refresh rate up.

I'll stop arguing about shitty monitors and shitty cables, but consider this:
Either the input lag is reduced slightly but the image quality suffers, or both get worse. And most people have no way of predicting or knowing which will be the case for them. Yet you go around and recommend lowering the resolution to reduce input lag when it might in fact do the opposite for most.
If you're only concerned about people with "*very good monitors*" but bad cables, you could just recommend the following:
What standard are the monitor's inputs rated for?
What standard is the cable rated for?
If the cable is rated for less bandwidth, then get a new one for $3. I mean, they could afford a "*very good monitor*" (that surprisingly didn't come with a cable); they should be able to save up for 3 months and buy a cable for $3.

Fixed queue doesn't mean triple buffered vsync doesn't exist. Fixed swapping order doesn't mean you can't skip frames. There's copy, you know.

Latin spelling is not a matter of choice. It hasn't changed in the last two millennia and you won't change it now. If you want to make up your own language go ahead, but don't expect anyone to understand it.

With triple buffering refreshes and rendering are independent, that's the whole point of it. Yet somehow in that picture rendering always starts at the same time as a refresh. Coincidence? I think not. It's normal VSYNC.

Nice argument from authority.

Didn't ignore that, the gaps are cosmetic.

I was too lazy to write "DirectX triple buffered VSYNC". In that case you can use an fps cap to force a delay before the next frame starts rendering if the previous render time was short.

And let's keep in mind that like I said, DirectX 9 isn't exactly state of the art anymore. DX11 is over 5 years old and has perfectly normal triple buffered VSYNC.

26
#26
0 Frags +

>But you go around and recommend lowering the resolution to reduce input lag when it might in fact do the opposite for most.

No, I said that if you have a shitty connection to a great monitor, it can help, and one should try it. I didn't recommend everyone switch to 800x600 just because. You completely misunderstand me.

>Latin spelling is not a matter of choice. It hasn't changed in the last two millennia and you won't change it now.

/badlinguistics/

spelling != words, different words, where did the word "romance" come from?

>Yet somehow in that picture rendering always starts at the same time as a refresh. Coincidence? I think not. It's normal VSYNC.

This is literally the fundamental "flaw" with modern vsync that I was showing. Old games consoles don't have this problem because they can program game logic in sync with the display. You can't do that with modern games because you have no idea how long a frame is going to last. If you run at a high framerate without vsync, you generate frames with inputs that are halfway closer to the point they're displayed. The point was to counter this:

"That means a normal 60fps cap is actually worse than VSYNC if you can consistently get 60fps. VSYNC eliminates that random delay."

With vsync, you're making the delay *always* be 16ms, when it's "on average" (it actually depends on the rendering time) 8ms otherwise. In a perfect world, it would be different.

Apologies for not connecting points well, but I thought order of inclusion would be enough.
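
(Toy sanity check of the 16ms-vs-8ms numbers above at 60Hz, under the assumption that without vsync a fresh frame is always ready and its age at scanout is uniformly distributed within a refresh; this ignores transfer and scanout details entirely.)

[code]
# "Always ~16.7ms with vsync vs ~8ms on average without", 60Hz, toy model.
import random

REFRESH = 1000 / 60
N = 100_000

# Without vsync (high uncapped fps), the frame on screen at each refresh finished
# rendering at a roughly uniform random point inside the previous refresh interval.
avg_no_vsync = sum(random.uniform(0, REFRESH) for _ in range(N)) / N

print(f"no vsync, average frame age at refresh: {avg_no_vsync:.1f} ms")  # ~8.3
print(f"vsync, frame age at refresh           : {REFRESH:.1f} ms")       # ~16.7
[/code]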

>Nice argument from authority.

No, it was advice. There was no argument there.

>In that case you can use an fps cap to force a delay before the next frame starts rendering if the the previous render time was short.

http://i.imgur.com/HXXI3ty.png

It would be so easy to eliminate tearing and "extra lag" if we just constantly rendered to one of three buffers OGL style, but nobody actually does that.

>And let's keep in mind that like I said, DirectX 9 isn't exactly state of the art anymore.

It would be nice if we didn't have any DX9-only games, but we do, and vsync buffering is almost never implemented properly.

27
#27
4 Frags +

nerd fight!

http://31.media.tumblr.com/a85dcca5f97ff959cd13d8a39a001800/tumblr_n8bn2sW4151ty4oouo1_500.gif

28
#28
0 Frags +

Still can't verify whether or not it helped, and that's a real problem.

I don't care about English etymology, but tell me, where do you think "addemendum" comes from? I'm 90% sure that you fucked up the already wrong spelling "addemdum", not that somehow "addemendere", which for some reason seems to mean the same as addere, was added to Latin while I wasn't looking.

I am aware of the problems with VSYNC and although the picture shows those quite nicely, it's irrelevant since we were talking about triple buffered VSYNC.

Uncapped, the input lag is lower; a cap at the refresh rate actually has higher input lag than normal VSYNC as long as the fps don't drop. Example: 2 frames, one rendered with VSYNC on, one with VSYNC off but the fps capped to 60. Both get displayed on refresh 2, so when did they start rendering? The VSYNC one started at the time of refresh 1; the capped one could have started anywhere between one render time before refresh 2 and slightly after refresh 0. The closer the render time is to the refresh time, the more likely the worst case becomes; in fact it will happen at some point. And that worst case is that the frame starts rendering somewhere between refresh 0 and 1, the cap delays the new render, and the next frame doesn't finish rendering until after refresh 2. Now you've got more input lag than with VSYNC until an fps drop occurs, because refreshes and rendering are locked out of phase with the same period.

This is not an argument for normal VSYNC, this is an argument against capping at the refresh rate, not related to our argument, that's why I put it under "bonus".
What I was trying to say there was: normal VSYNC = bad, capping at refresh rate = worse than VSYNC, therefore capping at refresh rate = really bad.

Yes, it all comes down to implementation, but there's also the option of letting the driver do the triple buffered VSYNC, not the API.

29
#29
0 Frags +

>Still can't verify whether or not it helped and that's a real problem.

Maybe you can't, but I can just go play at fullscreen 800x600 at any time for lower latency at the same framerate. It's not that hard. It's like the difference a 100Hz monitor makes, minus the extra smoothness.

>addemendum

Corruption of addendum; so?

>I am aware of the problems with VSYNC and although the picture shows those quite nicely, it's irrelevant since we were talking about triple buffered VSYNC.

Triple buffered vsync under DX still starts rendering on the sync point. It doesn't overcome this basic restriction. Show me a DX9-10 game that doesn't behave like this in vsync.

>a cap at the refresh actually has higher input lag than normal VSYNC as long as the fps don't drop.

No, a cap at or just above the refresh rate has unstable input lag, not higher input lag. Only when you dip below native hz does non-synced input lag become higher due to buffer clash. When you're running at or above native hz, there are points in time where the frames will start to render further into the future than with vsync, resulting in less lag. The higher the framerate the game is capable of, the better.

>the capped one could have started anywhere between one render time before refresh 2 and slightly after refresh 0.

Only if it started to render before refresh 1 and was displayed on refresh 2, i.e. when you're running below native fps.

>And that worst case is that the frame starts rendering somewhere between refresh 0 and 1, the cap delays the new render and the next frame doesn't finish rendering until after refresh 2.

You know what caps dependent on monitor vertical phase are called? "Vsync". You know what kind isn't? The kind we're talking about when we say cap. The cap won't delay the next frame until after refresh 2, it'll delay the next frame until 1/caprate after the previous render started; if the render lasts longer than 1/caprate, that doesn't make it take 2/caprate. It just starts overwriting the frame it's already built and starts causing tearing. (note: this is a vast oversimplification; point being that "it works out")

>Yes, it all comes down to implementation, but there's also the option of letting the driver do the triple buffered VSYNC, not the API.

Then you get the nightmare that is the 5770 driver. Thanks, Microsoft!

30
#30
0 Frags +

And how do you know it's lower latency? And no "I can clearly see it" doesn't count. Your brain is way too biased to be objective.

Corruption is the best excuse I've ever heard for a spelling error. Stuff like that really doesn't make you look good in a paper.

cap = refresh rate, therefore
1/cap = 1/refresh rate (measuring time in refresh periods, both are 1).
So if a render takes 1-y and render 0 started at 0+x, the next render won't start until 1+x and won't finish until 2+x-y. If x > y, that's after refresh 2.
Up until the tearing you get the higher input lag. Now tell me that having part of the frame with higher input lag and part of it with lower input lag, plus screen tearing, is good and feels smooth, and keep a straight face while saying it.

But let's get to the best part, shall we? There might not be any tearing. Someone might be using a low resolution, and in that case the transfer might be finished before the new frame finishes rendering. No tearing, and you get the higher input lag for the full frame. Worst case: y approaches 0, transfer time = x, input lag = 2-x.

See what I mean? If you don't have the means to verify the result, using something that might help or might have the opposite effect is stupid.
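
(Plugging numbers into that, with time measured in refresh periods as above. x and y are the same symbols from the post; the 0.2/0.1 values are arbitrary, chosen so that x > y.)

[code]
# render time = 1 - y, render 0 starts at x, cap = refresh rate (interval 1.0)
x, y = 0.2, 0.1                          # arbitrary values with x > y

render_time = 1 - y
start_1 = x + 1                          # the cap forces a full period between render starts
finish_1 = start_1 + render_time         # = 2 + x - y
print("frame 1 finishes at", round(finish_1, 3), "-> after refresh 2:", finish_1 > 2)

# No-tearing worst case from the post: y -> 0, so refresh 2 still shows frame 0,
# whose input was sampled at x. Input lag approaches 2 - x refresh periods,
# versus exactly 1 refresh period with plain VSYNC.
print("worst-case capped lag ~", round(2 - x, 3), "refreshes vs 1.0 with VSYNC")
[/code]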
