Setsul: "Are you arguing that anyone with a 120 Hz monitor should cap TF2 at 120 fps?"
Do I? No. But mentioning the current upper bandwidth cap was a byproduct of me trying to tell you that we have neither the screens nor the interfaces capable of running 4K 480 Hz. In fact we don't even want to, at least not yet. AKA "no one cares". Then why did you show 4K results in the first place, when 1080p is all the majority cares about in competitive shooters?
The funniest thing is: the best screen tech we have on hand today is 360 Hz, which we can drive with a 12/13600K, so not even I fully understand why I am shilling 30 and 40 series cards when they cost more than the computer able to drive the game at 360 Hz. In my defence, however, I was romanticising the possibility of 480 Hz gaming in the very near future, and it appears to me that if such a monitor were released tomorrow, the only way to get the last remaining performance droplets is a two fucking thousand dollar PCB with a side topping of silicon covered in two kilos of aluminium and copper.
This is just schizophrenic. "4K benchmarks above 120fps don't matter because we don't have the monitors"
"But the current fps counter says 900, ignore that the average says 500, this is really important"
Setsul: "Why are you so upset about high resolution benchmarks when you link them yourself because there's nothing else available?"
I'm upset because you pull up with 4K results, especially in Source, and then say
Setsul: "No one cares."
then pull some numbers out of your ass, and then
Setsul: "Are you going to see a noticeable difference between a 1080 and a 4090? No."
when the card is three times faster. But your 4K High and 1440p High benchmarks I'm supposed to care about?
You started this whole bullshit with your theory that a 4090 would give more fps in TF2 at 1080p on low settings because of some secret sauce.
I said it's not going to make a difference.
I don't see how the 4090 being "3 times as fast" in completely different games on 4K is going to matter. The fps do not magically transfer.
- Since we play at 1080p and this thread is about 1080p, shouldn't we pay more attention to 1080p?
- Since the framerate difference between 4K and 1080p is enormous, ranging in the hundreds of fps on the same card, why would you in sound mind show me 4K results and then proceed to talk about how incremental the performance uplift would be?
- Since the difference between top cards at 1080p is actually noticeable, isn't that worth focusing on?
- Then why the fuck do you keep digging up 1440p high and other benchmarks?
- Show me a single fucking benchmark that actually uses the same system and the same benchmark, just with different GPUs, instead of mixing completely different setups until they "prove" your theory based on shitty math
- Is it? Show me a single fucking benchmark.
Setsul: "No, we saw a 39 fps difference in a completely fucked up environment."
The 39 fps at equal conditions indicates there is a difference in the first place. The 4090 managed an even greater difference, and at a lower resolution: 1440p. Therefore it is safe to assume that:
Trying to extrapolate from behaviour on a 7950X with 6000 MHz CL36 RAM to a 12900KS with unknown clockspeed and unknown RAM on a completely different benchmark, and comparing that with yet another 12900K setup, is completely meaningless.
- There is still headroom at 1080p, which is likely going to be greater; it's just a matter of how quantifiable it is
- Since we only see the performance of $1000+ cards, shouldn't we ask ourselves how much bigger the gap is with a definitely slower 2080 Ti? What about a 2060? A 1060? How much performance should a current-generation high-end CPU owner expect from upgrading his 4-6 year old GPU if he wishes to improve his frames even further in a CPU-heavy title?
So there is a difference at 1440p on Very High Settings in CS:GO between a 3090 and a 4090. Why exactly does this mean there will be one in TF2 at 1080p on low?
Or are you trying to prove that the 4090 is significantly faster in general so it must always be significantly faster and the CPU limit is just a myth? What are you trying to show here?
And no, none of that is safe to assume.
Like I said, only the 3090 Ti and 3090 are apples to apples. If you extrapolate from those, the difference between the two should always be 39 fps no matter the resolution, which is obviously garbage.
If you extrapolate from the nonsensical 4090 4K results to "prove" that the difference between 4090 and 3090 (Ti) should be larger at 1080p then extrapolating in the other direction "proves" that at 8K the 3090 should actually be faster than a 4090, which is also complete garbage.
Garbage in, garbage out.
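To make that concrete, here is a sketch of what linear extrapolation across resolutions produces. The fps gaps below are invented for illustration, not taken from any real benchmark:

```python
# Illustrative only: invented fps gaps, not from any real benchmark.
# Fit a line through a GPU-vs-GPU gap at two resolutions, then
# extrapolate that line out to 8K.
res_mp = {
    "1440p": 2560 * 1440 / 1e6,  # megapixels per frame
    "4K": 3840 * 2160 / 1e6,
    "8K": 7680 * 4320 / 1e6,
}
gap_1440p, gap_4k = 150.0, 60.0  # assumed "4090 minus 3090" fps gaps

slope = (gap_4k - gap_1440p) / (res_mp["4K"] - res_mp["1440p"])
gap_8k = gap_4k + slope * (res_mp["8K"] - res_mp["4K"])

# The "predicted" gap goes deeply negative, i.e. the slower card "wins".
print(f"extrapolated 8K gap: {gap_8k:.0f} fps")
```

The extrapolated gap comes out around -426 fps, i.e. the method "predicts" the 3090 beating the 4090 at 8K, which is exactly the garbage described above.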
jnki: "Now regarding the mysteriously clocked 12900KS. It was more of a comparison to the
12900k 5.7 3090ti DDR5 7200 30-41-40-28 1080p Low 928.18fps
I was aiming to eliminate the 39 fps from the equation because it was present in the 'scuffed benchmark'. Very well. A 12900KS at stock clocks (I've found other game benchmarks on the channel), which according to Intel ARK should boost to 5.5 give or take (in line with the 5.5 12900K with the 3090), with JEDEC DDR5 and a 3060 Ti, scored 699 fps at 1080p Low. So then you are telling me that simply going from JEDEC DDR5 to 6400 gives you 136 fps, and that 7200, +200 on the clock, and dropping the efficiency cores nets you 228 fps total uplift. That is an Australian Christmas fucking miracle, don't you find? I may be delusional, but there is no way going from a 3060 Ti to a 3090 isn't responsible for at least 35% of that total gain at 1080p Low.
12900k 5.5 3090 DDR5 6400 30-37-37-26 1080p Low 836.43fps"
It's fucking bogus math.
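For what it's worth, recomputing the deltas from the figures quoted above shows why no share can be attributed to the GPU: every step changes RAM, clocks, E-cores, and GPU at once, and the 35% figure is jnki's assertion, not a measurement. A quick sketch:

```python
# Recomputing the quoted deltas. Each step changes several variables at
# once (RAM speed, CPU clock, E-cores, GPU), so no single component's
# share of the gain can be isolated from these numbers.
baseline = 699.0   # 12900KS stock, JEDEC DDR5, 3060 Ti, 1080p Low
mid = 836.43       # 12900K 5.5, DDR5 6400, 3090, 1080p Low
top = 928.18       # 12900K 5.7, DDR5 7200, no E-cores, 3090 Ti, 1080p Low

print(f"baseline -> mid: +{mid - baseline:.2f} fps")   # ~137 fps
print(f"baseline -> top: +{top - baseline:.2f} fps")   # ~229 fps
# The "35% GPU share" is asserted, not measured:
print(f"claimed GPU share: {0.35 * (top - baseline):.1f} fps")
```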
Let me do the exact same thing. You had a benchmark of a 12900KS, presumably at stock clocks, which can sometimes boost to 5.5 on at most two cores and otherwise runs 5.2, with RAM that is JEDEC DDR5 according to you but could just as well be DDR4, and a 3060 Ti, on unknown settings that look pretty low, supposedly "competitive settings", versus a benchmark of a 12900K at a constant 5.5/4.3, DDR5 6400 MHz, and a 3090, on "low".
We can look at the thread and compare a 13600KF + 1660 Ti with a 13600K + 1080 Ti.
5.4 GHz, DDR4 3600 MHz, "competitive settings" for the former, stock clocks, DDR5 5600 MHz and "low" on the latter.
13600K + 1080 Ti: 4812 frames 8.308 seconds 579.20 fps ( 1.73 ms/f) 81.025 fps variability
13600KF + 1660 Ti: 4812 frames 10.726 seconds 448.61 fps ( 2.23 ms/f) 83.439 fps variability
Slightly higher CPU clocks but much lower RAM clocks, so I conclude that at least 35% of those extra 131 fps are from "generational improvements" of the 1660 Ti.
In conclusion, the 1660 Ti is faster than a 1080 Ti because it's newer.
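Running the same attribution "logic" on those two results (a sketch; the 35% figure is just jnki's number reused) makes the absurdity obvious:

```python
# Applying the same attribution "logic" to the thread's own results.
fps_1080ti = 579.20   # 13600K stock, DDR5 5600, 1080 Ti, "low"
fps_1660ti = 448.61   # 13600KF 5.4, DDR4 3600, 1660 Ti, "competitive settings"

gap = fps_1080ti - fps_1660ti
print(f"gap: {gap:.2f} fps")                 # 130.59 fps
# By the same hand-waving, declare 35% of the gap "generational
# improvement" of whichever GPU you like, with zero evidence:
print(f"'GPU share': {0.35 * gap:.1f} fps")
```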
That's what you're doing.
Benchmarks on vastly different setups are fucking worthless for these comparisons.
Stop doing it.