Setsul
Account Details
SteamID64 76561198042353207
SteamID3 [U:1:82087479]
SteamID32 STEAM_0:1:41043739
Country Germany
Signed Up December 16, 2012
Last Posted April 26, 2024 at 5:56 AM
Posts 3425 (0.8 per day)
#25 low fps on tf2 with gtx 970 in Q/A Help

#16
FX vs Ivy Bridge i5, IB wins like 7/10. Justifies the higher price. Streaming is part of the 3/10 where the FX wins.
FX vs Haswell i5, Haswell wins 9/10. The FX can still win in very specific scenarios but in the real world it won't happen nearly enough to matter.

hooky: I thought the FX-8350 was actually really good for streaming?
hooky: ...since AMD was barely competition even then, so I can't blame myself for not knowing.

Really good + barely competition = does not compute
I am confused.

#17
Budget?
What's your goal in terms of fps/settings?

#19

DerpyGentleman: i dont have any problems with playing tf2 i have around 200+fps
Screwball: I had a 8320 paired with a HD 7950 and was around 200 to 400 FPS.

Checks out.
Now try streaming and you'll see the same fps drops OP gets.
Sorry to rain on your parade.

posted about 9 years ago
#15 low fps on tf2 with gtx 970 in Q/A Help

#14
You missed the last 3 years.

i5-3570 vs FX-8350 was interesting.
i5-4570 vs FX-8350 isn't.

Why do you think AMD dropped the FX series? They were competing with flagships against Intel's midrange. The best they could come up with was an overclocked version of the same CPU. What Intel came up with (Haswell) destroyed them. As in destroyed the previous flagship vs midrange competition. We're not even talking about i7s.

posted about 9 years ago
#17 PC Parts: short questions, quick answers in Hardware

I'm a bit late to the party.

#7
Budget?
Capacity?
Capacity is really important because every series only covers a certain range, and performance differs between sizes within the same series as well.

#9
You're arguing like this right now:
"Cigarettes cause cancer, I smoke cigarettes but I didn't get cancer yet, therefore these cigarettes don't cause cancer."

I recommend reading from this page on.
"That's right, the 12V rail on this unit is rated at 4A above the part that supplies it."

It's like putting a 100HP engine into a car and then selling the car as 150HP. There's no way you'd get away with this. However with PSUs you can actually get away with it. It's bullshit.
This is the reason why AMD and nVidia recommend 500W PSUs for 150W GPUs. It's not that you'd need more than 300W; it's that the worst case for a 500W PSU is that it's actually a 300W PSU. If they only recommended a 300W PSU they'd get blamed for all the problems with 300W PSUs that are actually <200W PSUs.
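To make the label math concrete, here's a minimal sketch; the numbers are made up for illustration and don't describe any particular unit:

# Hypothetical label numbers, not from any real PSU.
label_watts = 500            # what the box says
rail_12v_amps = 25           # +12V rating printed on the label
rail_12v_watts = 12 * rail_12v_amps   # 300W actually available on +12V

# CPU and GPU draw almost exclusively from +12V, so this is the
# number that matters under a gaming load, not the 500W on the box.
print(f"label: {label_watts}W, usable +12V: {rail_12v_watts}W")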

#12
You need neither G-Sync nor FreeSync.
Any GPU will work with pretty much any monitor.
G-Sync and FreeSync are additional features that address screen tearing (think VSYNC without the horrible input lag).
It's the other way round, if you want to use G-Sync you need an nVidia card, if you want to use FreeSync you need an AMD card (while FreeSync is in theory free, nVidia doesn't plan on supporting it).

However since both G-Sync and FreeSync offer pretty much the same features your GPU choice doesn't matter if you don't already own a monitor that supports one of them. Just buy whatever GPU is best for you and if you decide you want G-Sync/FreeSync later just buy a monitor that supports whatever your GPU supports.

The exception would be if you want a specific monitor that supports only one of the two and there's no other monitor with the same features.

#13
Quantity or size?

Speed is basically irrelevant; the average user won't be limited by bandwidth (especially when running dual channel).
What you want is low latency. The problem is that latency isn't given as a fixed number; it depends on the timings (e.g. CL9) and the speed. Lower timings -> lower latency, higher speed -> lower latency. In theory the lower the better, but generally the sweet spot is 1600MHz CL9. 1600MHz is supported by all motherboards and some go up to 1866MHz, but above that you're looking at overclocking motherboards, and the premium for those is much better spent on the CPU (unless you're overclocking anyway, in which case bring the 2400MHz RAM). CL9 is pretty much the lowest you can get at 1600MHz; anything lower and both price to performance and availability take a huge hit. (The actual latency math is sketched at the end of this post.)

Speed vs Size shouldn't be the decision.
Think HDDs. You could afford either a fast 1TB or a slow 2TB HDD. Which one do you get?
If you need less than 1TB you get the fast one because it's faster, if you need more than 1TB you simply have to get the 2TB one, speed doesn't help you when you run out of space.

Again, the average user should do fine with 8GB RAM. If you needed more you would know it. If you're on a tight budget you can go for 4GB and add another 4GB later when you run into problems or can easily afford it.

#16
You know that RAM latency is measured in nanoseconds.
A nanosecond is a million times less than the 1ms a 1000Hz mouse gives you.

Buying 1866MHz memory (you didn't specify voltage and timings, btw) to run it at 1600MHz is retarded. If you want 1600MHz CL7 then buy RAM that is binned for 1600MHz CL7; it is guaranteed to run at 1600MHz CL7 1.5V. If you don't have to stay at <=1.5V then just get 2133MHz CL9 1.65V. Same price as 1866MHz CL9 atm, and both higher bandwidth and lower latency than 1866MHz CL8 or 1600MHz CL7.
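For the curious, a minimal sketch of the latency math behind that claim (first-word latency = CAS cycles divided by the memory clock, which is half the DDR transfer rate):

def latency_ns(mt_per_s, cl):
    # cl cycles at a clock of mt_per_s/2 MHz -> nanoseconds
    return cl * 2000 / mt_per_s

for speed, cl in [(1600, 9), (1600, 7), (1866, 8), (1866, 9), (2133, 9)]:
    print(f"{speed}MHz CL{cl}: {latency_ns(speed, cl):.2f} ns")
# 2133MHz CL9 (~8.44 ns) does beat both 1866MHz CL8 (~8.57 ns)
# and 1600MHz CL7 (~8.75 ns), while also having the most bandwidth.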

posted about 9 years ago
#446 PC Build Thread in Hardware

I can walk you through delidding if you want me to. There's also the possibility that Intel might replace delidded CPUs, but I wouldn't bet on it.

PH-TC14PE vs NH-D14 depends on testing methodology, they're that close. Usually the price is in favour of the Phanteks, but in your case it isn't. I never understood why people care about cooler colours if their case doesn't have a window, we seem to be on the same page on that one.

Without mail-in-rebates other mainboards would've been cheaper.
Sorry about the RAM. You listed a CL10 kit though, is that a mistake?

MSI 4G and Gigabyte G1 are my recommendations.

posted about 9 years ago
#444 PC Build Thread in Hardware

i5s are pretty much out of the question; streaming just begs for hyperthreading.

1. NH-D14 actually makes sense in Canada, the PH-TC14PE is more expensive (which is unusual) and the NH-D15 is way more expensive (not that unusual). Overclocking Haswell isn't easy though, are you willing to delid the CPU?

2. My brain has failed me. I kept looking back up at your post while typing and "CAD" triggered full server mode after I already saw "streaming" and "rendering".
16GB could help with rendering, bandwidth/speed isn't really a concern unless you're doing some really intense stuff, I'm more worried about latency. Dual channel 1600MHz CL9 should be good enough, it's only 1 or 2$ more and a fairly big jump from 1333MHz CL9, anything above 1600MHz is only an option on Z87/Z97 anyway so let's figure that out first.

3. In other words you don't need anything special. For the 4790K you definitely want Z97 for overclocking which has all the features anyway. With another CPU you could get away with a cheaper chipset but I wouldn't recommend it. The mystical power delivery stuff that made some motherboards better for overclocking than others became irrelevant since Haswell has the voltage regulator on the CPU package. As long as the motherboard can provide enough power it'll overclock all the same.
PS/2 is actually worth mentioning since it's not a given anymore. I'm guessing you just need one for the keyboard?
Motherboard onboard sound is OK, but I wouldn't really invest in anything high-end. The ALC1150 is good enough; anything better than that and I'd be more worried about interference from all the digital signals and fields on the motherboard. If you need an amp you might as well get a soundcard (though even a well shielded one won't like the magnetic field you get when the CPU goes from almost 0A to 100A in a millisecond) or an external DAC+amp so you don't have to worry about anything.

4. 3TB got the best price to performance/size ratio so I'd go with that.
256GB should be a bit faster and has a far better price to size ratio as well. A midrange/performance SSD should be perfect; you don't need the absolute best performance, but you're not hard pressed on the budget either.

5. The 970 actually makes sense if you'll play "GTA and stuff". Just keep in mind that much of the 970's appeal comes from its overclockability. The 290's strengths lie at higher resolutions, so the 970 has more of an advantage at 1080p, but they're a lot closer than you might think. Even overclocked, the 290 might have a better price to performance ratio depending on the game, settings and resolution. I'd still get the 970 for 1080p though, if you're going to overclock it.

CPU is interesting, the 4790K actually makes sense even when you're not overclocking, simply because of the 4.0GHz base clock.
Price to performance would be:
No OC, cheap mobo > OC + delidding, cheap mobo > OC, cheap mobo > OC, expensive mobo > no OC, expensive mobo
An expensive mobo only makes sense if you actually need the features. Since you care about CPU performance the most (and no, I don't mind American expressions, it would be hard to survive in a forum full of Americans otherwise), I'd recommend OC + delidding. I mean you get to whack a CPU with a hammer, that's fun! The only thing left to decide is whether or not you might want/need SLI in the future. You'd already be getting a Z97 mobo so it'd only be about 30$ more, but that's still 30$ you should only spend if you actually need it.

Without SLI:
PCPartPicker part list / Price breakdown by merchant

CPU: Intel Core i7-4790K 4.0GHz Quad-Core Processor ($416.98 @ DirectCanada)
CPU Cooler: Noctua NH-D14 65.0 CFM CPU Cooler ($89.95 @ Vuugo)
Thermal Compound: Coollaboratory Liquid Ultra 0.15g Thermal Paste ($19.23 @ Amazon Canada)
Motherboard: ASRock Z97 Anniversary ATX LGA1150 Motherboard ($94.95 @ Vuugo)
Memory: Patriot Viper 3 16GB (2 x 8GB) DDR3-1866 Memory ($129.88 @ Canada Computers)
Storage: Crucial BX100 250GB 2.5" Solid State Drive ($119.99 @ NCIX)
Storage: Seagate Barracuda 3TB 3.5" 7200RPM Internal Hard Drive ($114.98 @ DirectCanada)
Video Card: MSI GeForce GTX 970 4GB Twin Frozr V Video Card ($419.95 @ Vuugo)
Case: Fractal Design Define R5 (Titanium) ATX Mid Tower Case ($129.99 @ NCIX)
Power Supply: EVGA 500W 80+ Bronze Certified ATX Power Supply ($44.99 @ NCIX)
Total: $1580.89
Prices include shipping, taxes, and discounts when available
Generated by PCPartPicker 2015-03-29 09:10 EDT-0400

With SLI (better onboard audio as well) and a better PSU:
PCPartPicker part list / Price breakdown by merchant

CPU: Intel Core i7-4790K 4.0GHz Quad-Core Processor ($416.98 @ DirectCanada)
CPU Cooler: Noctua NH-D14 65.0 CFM CPU Cooler ($89.95 @ Vuugo)
Thermal Compound: Coollaboratory Liquid Ultra 0.15g Thermal Paste ($19.23 @ Amazon Canada)
Motherboard: Gigabyte GA-Z97X-SLI ATX LGA1150 Motherboard ($124.99 @ NCIX)
Memory: Patriot Viper 3 16GB (2 x 8GB) DDR3-1866 Memory ($129.88 @ Canada Computers)
Storage: Crucial BX100 250GB 2.5" Solid State Drive ($119.99 @ NCIX)
Storage: Seagate Barracuda 3TB 3.5" 7200RPM Internal Hard Drive ($114.98 @ DirectCanada)
Video Card: MSI GeForce GTX 970 4GB Twin Frozr V Video Card ($419.95 @ Vuugo)
Case: Fractal Design Define R5 (Titanium) ATX Mid Tower Case ($129.99 @ NCIX)
Power Supply: EVGA SuperNOVA NEX 650W 80+ Gold Certified Fully-Modular ATX Power Supply ($74.99 @ NCIX)
Total: $1640.93
Prices include shipping, taxes, and discounts when available
Generated by PCPartPicker 2015-03-29 09:14 EDT-0400

The Gigabyte G1 970 was on sale on NCIX for 410$ until yesterday, keep an eye out for that.

I'll see if I can find a case that you might like.

PSU: bare necessities CX430/500B
A step up from that, fully modular and 80 PLUS Gold, I'd recommend the G1 650W atm.

posted about 9 years ago
#30 120fps on 60hz Monitor in TF2 General Discussion

And how do you know it's lower latency? And no "I can clearly see it" doesn't count. Your brain is way too biased to be objective.

Corruption is the best excuse I've ever heard for a spelling error. Stuff like that really doesn't make you look good in a paper.

cap = refresh rate therefore
1/cap = 1/refresh rate.
So if a render takes 1−y refresh periods and render 0 started at 0+x, the next render won't start until 1+x and won't finish until 2+x−y. If x > y, that's after refresh 2.
Up until the tearing you get the higher input lag. Now tell me that having part of the frame with higher input lag, part of it with lower input lag, and screen tearing on top is good and feels smooth, and keep a straight face while saying it.
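A quick sketch of that timeline, with times in refresh periods and hypothetical values for x and y:

# Times in refresh periods; x and y are made-up example offsets.
x = 0.5   # render 0 started this long after refresh 0
y = 0.3   # a render takes 1 - y periods

start_1 = 1 + x                 # cap = refresh rate delays render 1
finish_1 = start_1 + (1 - y)    # = 2 + x - y
print(f"render 1 finishes at t = {finish_1:.1f}")
if x > y:
    print("finished after refresh 2: a full extra refresh of lag")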

But let's get to the best part, shall we? There might not be any tearing. Someone might be using a low resolution, and in that case the transfer might be finished before the new frame finishes rendering. No tearing, and you get the higher input lag for the full frame. Worst case y approaches 0, transfer time = x, input lag = 2−x.

See what I mean? If you don't have the means to verify the result, using something that might help or might have the opposite effect is stupid.

posted about 9 years ago
#28 120fps on 60hz Monitor in TF2 General Discussion

Still can't verify whether or not it helped, and that's a real problem.

I don't care about English etymology, but tell me, where do you think "addemendum" comes from? I'm 90% sure that you fucked up the already wrong spelling "addemdum", not that somehow "addemendere", which for some reason seems to mean the same as "addere", was added to Latin while I wasn't looking.

I am aware of the problems with VSYNC and although the picture shows those quite nicely, it's irrelevant since we were talking about triple buffered VSYNC.

Uncapped, the input lag is lower; a cap at the refresh rate actually has higher input lag than normal VSYNC as long as the fps don't drop.

Example: 2 frames, one rendered with VSYNC on, one with VSYNC off but the fps capped to 60. Both get displayed on refresh 2, so when did they start rendering? The VSYNC one started at the time of refresh 1. The capped one could have started anywhere between one render time before refresh 2 and slightly after refresh 0. The closer the render time is to the refresh time, the more likely the worst case becomes; in fact it will happen at some point. And that worst case is that the frame starts rendering somewhere between refresh 0 and 1, the cap delays the new render, and the next frame doesn't finish rendering until after refresh 2. Now you've got more input lag than with VSYNC until an fps drop occurs, because refreshes and rendering are locked out of phase with the same period.
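A minimal sketch of that worst case, with times in refresh periods and a made-up render time:

import math

render_time = 0.9   # hypothetical, close to the refresh time

# VSYNC: rendering starts at refresh 1, so input is sampled then.
vsync_start = 1.0
vsync_shown = math.ceil(vsync_start + render_time)   # refresh 2

# Cap at the refresh rate: the render started just after refresh 0
# and the phase never corrects, because cap period == refresh period.
cap_start = 0.3
cap_shown = math.ceil(cap_start + render_time)       # also refresh 2

print(f"VSYNC lag: {vsync_shown - vsync_start:.1f} refreshes")
print(f"capped lag: {cap_shown - cap_start:.1f} refreshes")
# Same refresh, older input: the cap loses until an fps drop occurs.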

This is not an argument for normal VSYNC, this is an argument against capping at the refresh rate, not related to our argument, that's why I put it under "bonus".
What I was trying to say there was: normal VSYNC = bad, capping at refresh rate = worse than VSYNC, therefore capping at refresh rate = really bad.

Yes, it all comes down to implementation, but there's also the option of letting the driver do the triple buffered VSYNC, not the API.

posted about 9 years ago
#25 120fps on 60hz Monitor in TF2 General Discussion

Transfer time < refresh time, by definition, otherwise you wouldn't be able to keep the refresh rate up.

I'll stop arguing about shitty monitors and shitty cables but consider this:
Either the input lag is reduced slightly but the image quality suffers, or both get worse. And most people have no way of predicting or knowing which will be the case for them. Yet you go around recommending lowering the resolution to reduce input lag when it might in fact do the opposite for most.
If you're only concerned about people with "*very good monitors*" but bad cables you could just recommend the following:
What standard are the monitors inputs rated for?
What standard is the cable rated for?
If the cable is rated for less bandwidth then get a new one for 3$. I mean they could afford a "*very good monitor*" (that surprisingly didn't come with a cable) they should be able to save up for 3 months and buy a cable for 3$.

Fixed queue doesn't mean triple buffered vsync doesn't exist. Fixed swapping order doesn't mean you can't skip frames. There's copy, you know.

Latin spelling is not a matter of choice. It hasn't changed in the last two millennia and you won't change it now. If you want to make up your own language go ahead, but don't expect anyone to understand it.

With triple buffering refreshes and rendering are independent, that's the whole point of it. Yet somehow in that picture rendering always starts at the same time as a refresh. Coincidence? I think not. It's normal VSYNC.

Nice argument from authority.

Didn't ignore that, the gaps are cosmetic.

I was too lazy to write DirectX triple buffered VSYNC. In that case you can use an fps cap to force a delay before the next frame starts rendering if the previous render time was short.

And let's keep in mind that like I said, DirectX 9 isn't exactly state of the art anymore. DX11 is over 5 years old and has perfectly normal triple buffered VSYNC.

posted about 9 years ago
#442 PC Build Thread in Hardware

Nice, this is going to be fun. No budget constraints. Finally another type II build.

I believe there's only two ways to get a proper build:
Type I: Set a budget and then optimize for performance
Type II: Set a performance goal and then optimize for price
Both with a bit of wiggle room to optimize for price to performance ratio.

Type II is more fun, but it's also more difficult not to go overboard.

You need to know exactly what you want/need.

Are you going to overclock? Is it a must, a bonus, or just for extra performance?
Would ECC RAM be beneficial?
What features should/must the motherboard have?
SSD/HDD size?
How much CPU/GPU do you actually need? How much do you want? 970 is overkill just for CS:GO on low settings. 4790K + OC mobo + cooler wastes a lot of money if you need neither the CPU power nor the mobo's additional features compared to a cheaper one.

Dual socket an option?

I died a little inside when I saw the RAM, SSD and PSU.

Well it's the time of simplistic, rectangular, black box cases, certainly not everyone's cup of tea.
Anything specific you're looking for in a case? I might be able to dig something up.

You're right about the Lian Li PC-TU200B, there's simply no exhaust if you don't include the GPU in a custom loop or mod either the PSU mount or the PSU itself.

posted about 9 years ago
#22 120fps on 60hz Monitor in TF2 General Discussion

E.g. cap at 2 times the refresh rate, fps just below 3 times the refresh rate, frame starts rendering in sync with the refresh. Normal triple buffering renders 2 frames until the next refresh, the 2nd starting a bit less than 2/3 of the refresh time before the next one. The third frame just barely doesn't get rendered in time.
DX renders 2 frames as well, but because of the cap the second doesn't start rendering until 1/2 a refresh after the previous one/before the next one. Almost 1/6 of a refresh time less input lag, PRAISE SCIENCE!
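The arithmetic, if you want to check it (units are refresh periods, render time is the hypothetical value from the example):

from fractions import Fraction as F

render = F(1, 3)         # fps just below 3x the refresh rate
cap_interval = F(1, 2)   # cap at 2x the refresh rate

# Normal triple buffering: frame 2 starts as soon as frame 1 is done.
plain_start = render                  # 1/3 after the refresh
# DX-style with the cap: frame 2 can't start before t = 1/2.
dx_start = max(render, cap_interval)  # 1/2 after the refresh

print(f"frame 2 start: plain {plain_start}, capped {dx_start}")
print(f"input lag saved: {dx_start - plain_start} of a refresh")  # 1/6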

#20

Setsul: ... So whenever you don't have to cater to wonky engines with the fps cap ...

TF2 is safely in the wonky engine category, sorry if I didn't make that clear.

posted about 9 years ago
#21 120fps on 60hz Monitor in TF2 General Discussion

Transfer time is irrelevant unless your monitor has horrible input lag; in that case you could "improve" your situation and go from horrible input lag to slightly less but still horrible input lag, plus horrible picture quality.
Progressive scan -> doesn't matter, limited by refresh rate
Buffered -> two options:
1. bad monitor in terms of latency (most likely since it's using a buffer so latency clearly isn't a concern) -> transmit at default speed, transfer time is set by refresh rate (identical to progressive scan, except you got one frametime of lag added)
2. good monitor in terms of latency that uses the maximum transfer rate, but for some reason it's using a buffer, which makes no sense if they care about latency, so those monitors are rare/nonexistent -> normal latency + transfer time, still worse than progressive scan.

Progressive -> transfer time is irrelevant.
1. -> transfer time is irrelevant
2. -> transfer time is in theory relevant. In practice only very bad monitors do that, so you'll probably get another 30ms of response time on top and you're already fucked in terms of input lag.

-> You want progressive scan, and in that case transfer time is set by the refresh rate; no cable or transfer protocol is going to change that, the best they could do is reduce the overhead time. In addition, it'll only work if the monitor's inputs are capable of higher speeds than what the monitor actually needs.

I'll show you with my awesome MS Paint skills a bit further down.

DX9 does support triple buffering. It can't drop frames, that's the problem. And it's only a problem at much higher fps than the refresh rate, so the added input lag is a fraction of the refresh time. The pre-rendered frames queue adds more input lag; reducing its length helps more.
Also let's keep in mind that DX9 is over 12 years old. That's not exactly state of the art anymore.
http://www.gamedev.net/topic/649174-why-directx-doesnt-implement-triple-buffering/
http://gamedev.stackexchange.com/questions/58481/does-directx-implement-triple-buffering

wareya: If you send it an extremely small frame (such as 800x600 16kcolors) then it doesn't need to wait for the entire panel worth of data to be transmitted before it has data with which it can write to the bottom row.

1. 15bpp is 32k and 16bpp is 64k.
2. Your eyes are pretty much the worst measurement tool except for random guessing.
3. You've mixed up two cases.
If the scaler waits for a complete frame to be buffered, transfer time matters, but it can't start immediately.
If the scaler starts immediately, transfer time doesn't matter.

The bottom row is the last one to be transmitted. Obviously it has to be transmitted before it can be displayed, and by definition, once the last row has been transmitted the whole frame has been transmitted. Also, this is exactly what I talked about in the best case: the scaler can start working with the first transmitted line, so only the scaler latency is added. Given sufficient bandwidth the scaler could finish the frame only slightly after the transmission has been completed. You're right in that respect. However this isn't the limitation. If the display is running at max refresh rate, going from the top to the bottom row during a refresh takes pretty much exactly 1/[refresh rate] seconds.

Example, progressive scan (ignoring response time): 1ms scaler latency, 1920x1080 16.67ms transfer time, 800x600 3.8ms transfer time, 16.67ms refresh time.
No scaler: frame transmission starts at 0ms, refresh starts at 0ms. Transmission ends at 16.67ms, refresh ends at 16.67ms. Next refresh from 16.67ms to 33.33ms and so on.
Scaler: Frame transmission starts at 0ms, scaler starts at 0ms. Scaler finishes first line at 1ms, refresh starts at 1ms. Transmission ends at 3.8ms. Scaler finishes last line at 4.8ms. Next transmission starts at 16.67ms, scaler starts at 16.67ms. Refresh ends at 17.67ms, scaler finishes first line at 17.67ms. New refresh starts at 17.67ms.
See the problem? Refresh time is the limit. All it did was add the scaler latency.
Now an example with a buffered scaler where transfer time matters (T=transmission, S=scaler, R=refresh):
T starts at 0ms. T ends at 3.8ms. S starts at 3.8ms. S finishes first line at 4.8ms, R starts at 4.8ms. S finishes last line somewhere between 4.8ms and 8.6ms. New T starts at 16.67ms. T ends at 20.47ms. S starts at 20.47ms. S finishes first line at 21.47ms, R ends at 21.47ms, new R starts at 21.47ms.
Now it's scaler latency + transfer time. This is only an improvement if scaler latency + small res transfer time is less than large res transfer time and if the monitor buffers a frame no matter what instead of using progressive scan for the native resolution. Is it a TV?

Bottom line is that a scaler can't save time. It can't speed up the refresh itself. Even if the refresh only takes e.g. 13.33ms (75Hz) physically on the screen, so transfer time is the limiting factor, lowering the resolution only improves things marginally. Yes, the average display lag is reduced by 1.167ms, but there will still only be one frame transmitted every 16.67ms. If the next frame has to wait 5ms for its transmission then that won't change at all. What you want to do in that case is overclock that monitor, and that's when transfer time actually matters. If the cable/output/input/protocol/whatever doesn't have sufficient bandwidth to keep the transfer time below the refresh time, you have to drop the resolution. Because even with the 1ms scaler latency, for that 5ms delayed frame you gain another 3.33ms (OC to 75Hz), or a total of 4.5ms reduction.
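Here's the same timeline as a sketch you can fiddle with; the numbers match the example above and are hypothetical:

refresh = 16.67   # ms per 60Hz refresh; also the native 1080p transfer time
scaler = 1.0      # ms of scaler latency (example value)
t_low = 3.8       # ms to transmit an 800x600 frame

# Native progressive scan, no scaler: first line shown immediately.
native_first_line = 0.0
# Line-by-line scaler: first line appears after the scaler latency.
progressive_first_line = scaler        # 1.0 ms
# Frame-buffering scaler: the whole (small) frame must arrive first.
buffered_first_line = t_low + scaler   # 4.8 ms

print(f"first line delay: native {native_first_line} ms, "
      f"line-by-line scaler {progressive_first_line} ms, "
      f"buffered scaler {buffered_first_line} ms; "
      f"one frame per {refresh} ms either way")
# The scaler can only add latency, never shorten the refresh itself.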

[talks about triple buffered vsync]
[posts picture about normal vsync]

wareya: Addemendum

Did you mean addendum?

http://i.imgur.com/K8mTUek.png

Black: Sampled input with constant vsync
Light Blue: vsync delay + rendering
Dark Green: waiting for display + actual refresh
Yellow: Random input delay (earlier inputs more delay, later less)
Red: Constant delay through rendering + waiting.
Light Green: Random refresh delay (top line appears first, no delay, bottom line last, full delay)
Brown: Same frame displayed twice.
Dark Blue: Rendering

You see the first missed refresh? That one is unavoidable. You'll have to live with one added refresh time of input lag, even with uncapped fps.
The problem is the second one. Because the next frame doesn't start rendering until the refresh starts you get the input over two full refreshes instead of around one and all of that data is delayed by another refresh if the frame doesn't render in time. And those refreshes worth of lag keep going for as long as the fps are below the refresh rate. Triple buffering avoids that because the next frame already starts rendering while the previous one is waiting (pink) because it missed the refresh. That reduces the added input lag to the normal one refresh time that you get when the fps are below the refresh rate.
And that holds true for DX triple buffering as well. The only difference occurs when the fps spike to more than twice the refresh rate. DX won't render a third frame until the last refresh finishes, and that adds some input lag. However it might actually feel smoother, because on DX the average input lag, while higher, stays pretty much the same, whereas on normal triple buffering it drops in the next frame, then spikes up again, then drops again and so on. Well, in the picture it doesn't drop because the fps drop pretty harshly, instead it increases less; on more stable fps it would drop, but you get the idea, the variation increases.
You can both get DX to behave more normally (reducing input lag) and get normal triple buffering to behave more like DX (smoothing input lag) by using an fps cap. In fact, thanks to statistics, DX can actually have less input lag than normal triple buffering.

Continued in next post.

posted about 9 years ago
#18 120fps on 60hz Monitor in TF2 General Discussion

I forgot to mention, transfer time is irrelevant, other than limiting the refresh rate. Any low input lag monitor should be using progressive scan as in the first horizontal line is transmitted and immediately displayed (well the pixels still take a few milliseconds to change, but they start changing at that time). Right after that the next line will finish transmission and get displayed and so on. And after the bottom line starts refreshing the whole thing starts over at the top. No delay. So while the first line is only delayed by a bit of electronics (1ms-30ms, that's what they write on the box as response time) plus the time it takes for the complete transmission of the line (15µs, yes microseconds, less than 1/1080 of the 16.67ms (60Hz refresh rate) it takes to transmit all 1080 lines) the last line will be delayed by the full 16.67ms plus the response time. That's simply how monitors work.
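The per-line number is easy to verify (a minimal sketch):

lines = 1080
refresh_ms = 1000 / 60             # 16.67 ms to scan out all lines
line_us = refresh_ms / lines * 1000
print(f"{line_us:.1f} us per line")   # ~15.4 us, the ~15 us above
# So: first line lag ~= response time,
# last line lag ~= response time + 16.67 ms.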

Or to be precise that's how it ideally works. For example a 30Hz interlaced video signal would get delayed by at least 33.33ms(first line)/50ms(last line) + x if not 66.67ms/83.33ms + x because 2 frames have to be buffered to deinterlace the signal (1 complete 30Hz frame transmission + response time + a bit of calculation time if it starts to calculate the new frame once the first line of the second frame starts transmitting, otherwise 2 full 30Hz frames delay).
Another good example is scaling. Yes, lower, non-native resolutions actually add latency, not reduce it. Best case the scaler starts to calculate once a couple of lines have been transmitted so you just get the few milliseconds (or even less if it's a good hardware scaler) it takes the scaler to calculate the new lines. The screen can't refresh any faster so the last line is still delayed by the 16.67ms it takes for the refresh cycle to get there plus the added latency of the scaler. Average case: Instead of having to wait for new lines every couple of lines the scaler buffers a full frame and then starts to work. So you get the scaler latency and the e.g. 4ms for 800x600 it takes to transmit one frame on top of the usual stuff. Worst case same thing as average except the frames get transmitted with just enough bandwidth to get 60Hz so every frame takes 16.67ms + scaler + the usual stuff. So you went from response time for the first line and response time + 16.67ms for the last line to response + scaler + 16.67ms for first and response + scaler + 33.33ms for last. Congratulations, you just more than doubled your input lag.

tl;dr
Don't use non-native resolutions on LCDs.
It only reduces input lag on CRTs and only if you can increase the refresh rate because of the lower resolution.

Bonus: The GPU can't just randomly start to send frames to the monitor, it needs to be synced with the monitor's refreshes. Games don't give a fuck about the monitors so the only way the GPU can solve the problem is by waiting until the monitor requests a frame. And if that happens 0.01ms before a new frame starts drawing then so be it. In that case the monitor will get a 16.66ms old frame. So another completely random 0-16.66ms lag (60fps). It's different every time the monitor is turned on or is plugged in. Even different outputs/monitors on the same GPU will have different delays. That means a normal 60fps cap is actually worse than VSYNC if you can consistently get 60fps. VSYNC eliminates that random delay. Triple buffered VSYNC also eliminates the lag from the fps cap. So whenever you don't have to cater to wonky engines with the fps cap and if it's available, provided you have sufficient VRAM, use triple buffered VSYNC.

posted about 9 years ago
#11 new computer (not building) in Hardware

alternate.de charges 100€.
hardwareversand.de charges 30€ (there are frequent promotions for 5/10/15/20€, I think I've still got a code lying around). And the prices are lower as well.

posted about 9 years ago
#15 120fps on 60hz Monitor in TF2 General Discussion

#14
Correction: It's 1920x1200p 60Hz for HDMI 1.0-1.2. 1600x1200p is less than 1920x1080p. 1.3 is 2560x1600p 60Hz. 1.4 is the same bandwidth as 1.3 but adds 3D (1920x1080p 24Hz) and 4K (3840x2160p 30Hz, 4096x2160p 24Hz). 2.0 doubles the bandwidth again (4K 60Hz, 4K 3D 30/24Hz, 1080p 3D 60Hz).

Or in terms of transfer time for 1920x1080p:
1.0-1.2: ~16ms
1.3-1.4: ~8ms
2.0: ~4ms
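Those transfer times fall out of the pixel clock limits; a rough sketch, assuming standard 1080p timing with blanking (2200x1125 total pixels):

total_pixels = 2200 * 1125   # 1080p CEA timing including blanking

# Max TMDS pixel clocks per HDMI version (MHz):
for ver, mhz in [("1.0-1.2", 165), ("1.3-1.4", 340), ("2.0", 600)]:
    ms = total_pixels / (mhz * 1e6) * 1000
    print(f"HDMI {ver}: ~{ms:.1f} ms per 1080p frame")
# ~15.0 / ~7.3 / ~4.1 ms, close to the rounded figures above
# (the exact values depend on overhead).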

posted about 9 years ago
#7 120fps on 60hz Monitor in TF2 General Discussion

Depends on the game. In single player, if you consistently get fps > refresh rate it's an easy choice: no tearing > less input lag -> vsync; less input lag > no tearing -> uncapped, no vsync.

In multiplayer however you have to keep the networking in mind. Some games do interesting stuff at certain framerates. Quake comes to mind: the physics engine used to run at 125 tickrate, and iirc even Quake Live servers still run at 125/250 tickrate, so anything that didn't sync up with the 8ms/4ms tick interval would cause interesting stuff to happen, e.g. firerates and jump height changing. Same thing for TF2: 60Hz/16.67ms frames and 66.67 tickrate/15ms ticks obviously don't gel well.
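A tiny sketch of that mismatch, showing how the age of the newest tick wanders at 60fps:

tick = 15.0          # ms per tick at 66.67 tickrate
frame = 1000 / 60    # 16.67 ms per frame at 60 fps

for n in range(1, 5):
    t = n * frame
    age = t % tick   # age of the newest tick when the frame renders
    print(f"frame {n}: newest tick is {age:.2f} ms old")
# The offset drifts every frame instead of staying constant,
# which is part of why TF2 feels smoother at higher fps.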

tl;dr
TF2 is smoother at higher fps.

posted about 9 years ago