Setsul
Account Details
SteamID64 76561198042353207
SteamID3 [U:1:82087479]
SteamID32 STEAM_0:1:41043739
Country Germany
Signed Up December 16, 2012
Last Posted April 6, 2024 at 11:19 AM
Posts 3424 (0.8 per day)
#6 AM4 CPU recommendations in Hardware
Procreative: Looking to upgrade my CPU. Mostly just for TF2 and other old, badly optimized games. Currently got a Ryzen 5 3600. Idk how much single-thread performance matters vs. overall benchmarks. Assistance/recommendations for the same socket would be appreciated.

Do you have a budget?
For games, the fastest you can get would be the 5800X3D.
Though as always, we can't rule out TF2 being weird until someone tests it.
Otherwise, and as a cheaper option, my recommendation would be the 5600X. I doubt the extra two cores of the 5800X (not X3D) would do anything, and the extra 0.1 GHz are really not worth how much more it costs.

posted 11 months ago
#751 TF2 benchmarks in TF2 General Discussion
crespi: So you admit I was correct but your contention is that I should have said "please double check" that XMP is enabled even though I said:

It really sounds like you have no valid contention at all but have some sort of compulsion to be argumentative, condescending, and overly concerned with semantics and irrelevant technical details that make it appear like you know a lot more than others. In the more than 10 years I've been an IT professional, I've dealt with many people like you. It's interesting how far you're willing to bend the truth and ignore context just to try and make a stranger feel dumb.

Nope, you're just failing at reading comprehension now or just refuse to accept that you could ever be wrong about anything.
You guessed correctly that XMP wasn't enabled. It is a good first guess without much to go on.
The screenshot shows that the frequency was what it should be, but the timings weren't.
You were, incorrectly, worried about the frequency, and completely missed the timings.
If you had not mentioned the frequency, no one would've noticed, because XMP was indeed not enabled.
Si tacuisses, philosophus mansisses ("had you kept silent, you would have remained a philosopher"), pretty much what brimstone recommended.

You are being downfragged because listening to your advice would've made things worse. Task Manager would've shown the RAM running at 3200 "MHz" even though XMP still wasn't enabled, and Task Manager can't show you that, because it doesn't show timings. That's why the CPU-Z screenshot is more useful. Everyone involved already knew that.
Why are you expecting gratitude for explaining a way to completely miss the problem?
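
As an aside, a rough sketch of why that is (Windows-only, assuming Python and PowerShell are available; as far as I know this reads the same SMBIOS data Task Manager does):

```python
import subprocess

# Win32_PhysicalMemory reports speeds but has no fields for CAS/RAS timings,
# which is why the CPU-Z screenshot is the more useful one here.
# Windows-only sketch; assumes PowerShell is on PATH.
query = (
    "Get-CimInstance Win32_PhysicalMemory | "
    "Select-Object Manufacturer, PartNumber, Speed, ConfiguredClockSpeed | "
    "Format-Table -AutoSize"
)
result = subprocess.run(
    ["powershell", "-NoProfile", "-Command", query],
    capture_output=True, text=True, check=True,
)
# 'Speed' is the rated MT/s, 'ConfiguredClockSpeed' what the kit actually runs at;
# neither tells you whether the XMP timings are applied.
print(result.stdout)
```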

Yes, I will judge you for giving harmful advice, and yes, I do think I know more than you. For example what DRAM timings are. I know this upsets you, and if it causes you to stop polluting the thread with your bruised ego, then I will have reached my goal.
That's it from me.

posted about a year ago
#747 TF2 benchmarks in TF2 General Discussion
crespi: Brimstone: the fact that you lacked the technical experience/knowledge to even correctly interpret one of the most common applications in existence speaks volumes to your credibility. if you don't actually know how to help please understand the move that will make you look the smartest is to not post anything to begin with
The fact that you think you can judge someone's technical experience/knowledge from such a petty, intentionally uncharitable interpretation of one thing they said speaks volumes to your character as a human being, and your lack of credibility when it comes to soft skills. Enjoy your anonymity on this online forum, but no decent person would talk like this to someone without justification, and being wrong about something isn't justification for treating people this way.

Consider:
You didn't know the difference between MT/s and MHz and thought that Iso's RAM was running at the absolute minimum legal speed for DDR4, with even worse timings than the JEDEC standard. This can't happen automatically, so it amounts to you uncharitably interpreting it as Iso having manually crippled his RAM.

BeaVerN knew, or at least suspected, that 20-20-20-40 are the JEDEC timings for DDR4-3200W. Turns out he was right. Assuming Iso linked the correct kit, the XMP profile should be using 16-18-18-38.
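
For context on what that timing gap means in actual latency, the back-of-the-envelope conversion looks like this (the XMP numbers are taken from the kit Iso linked, assuming it's the right one):

```python
# Rough sketch: convert CAS latency from cycles to nanoseconds.
# The memory clock is half the transfer rate (DDR4-3200 = 3200 MT/s on a
# 1600 MHz clock), which is also why MT/s and MHz keep getting mixed up.
def cas_latency_ns(transfer_rate_mts: float, cl: int) -> float:
    clock_mhz = transfer_rate_mts / 2       # DDR: two transfers per clock
    return cl / clock_mhz * 1000            # cycles / MHz -> nanoseconds

print(cas_latency_ns(3200, 20))  # JEDEC 3200W bin (20-20-20-40): 12.5 ns
print(cas_latency_ns(3200, 16))  # XMP profile (16-18-18-38):     10.0 ns
```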

So you guessed correctly, XMP was not enabled, but then you completely dropped the ball due to your lack of technical experience/knowledge. If you had just said "Looks like XMP isn't enabled, please double check" no one would've even noticed.
You were then presented with two choices: Learning from this mistake and expanding your knowledge, or whining about getting downfragged for being wrong about something outside of your expertise.

You chose the salt.

posted about a year ago
#742 TF2 benchmarks in TF2 General Discussion
crespi: Iso: Crespi, nobody cares. Just take the -3 frags (it was only -3 at the time) and move on.
It was -6 the entire time now it's -7. I wasn't even wrong, was just trying to help someone on my own free time just to be nice and people here can't help but be toxic. So I don't feel like helping anymore. Good luck.

No, you were wrong. Just because you wrote "it looks like" doesn't mean you were any less wrong about it.

Even well-meaning misinformation will and should be downfragged, because it wastes people's time if taken seriously. Do not take that personally.

It seems like out of all the posters involved, you had the least knowledge to contribute and then decided to obsess over getting downfragged for it instead of actually trying to help anyone.
Learn from this that CPU-Z is correct, that MHz are not the same as MT/s even though DRAM marketing likes to pretend they are, so you can get it right next time, and move on.

posted about a year ago
#3907 PC Build Thread in Hardware

Well, these days you usually won't get much of an overclock, though it's still possible.
The Intel chips tend to be pretty maxed out, and higher all-core clocks run into power/temperature issues fairly quickly, but there can be a bit of room for single cores, especially because the standard turbo settings sometimes don't allow the max single-core boost on just any core, only on a select few, and a bit more voltage can help with that.
For Ryzen, the 105W and especially 65W chips obviously have quite a bit of room, though again you'll likely run into temperature issues before voltage becomes a problem, let alone power draw if you've got a mobo that could handle the 170W SKUs.

But generally yes, because temperature is the main issue, the trend is to simply undervolt to keep those CPUs a bit cooler, and let the standard boost mechanisms take care of the rest. The end result is effectively an overclock anyway, because now they can boost higher and/or more often before running into the temperature, power, or voltage limits.

Also, it'll be either a 7700X3D or a 7800X3D. Just one 8 core SKU, as far as I know, with the name depending on the price (and performance segment) they're going for.

If you do get a Z690 mobo, make sure it's one that can have its BIOS updated without a CPU, or you're going to run into some troubles.

posted about a year ago
#7 4 suggestions for RGL in TF2 General Discussion

This is disappointing even for an RGL Thread.

posted about a year ago
#3905 PC Build Thread in Hardware

I mean you've said it yourself? The 7700X is a bit cheaper and more efficient (not as dependent on boost too, 4.5 vs 3.4 base), maybe a bit faster in source engine games (haven't seen it personally though), in everything else the 13700K probably wins.
I'm not sure how much that would change depending on if/how you overclock, but you'll need to decide on that to pick the right mobo.

I should add that the price drop on the 7700X is likely permanent, due to the existence of the 13700K, even though AMD likes to pretend it's a special offer for the holidays.

And another thing: there should be a 7700 (65W non-X) and a 7700X3D or 7800X3D (170W with 3D V-Cache) announced at CES in January. The former would fill the cheaper, more efficient niche better than the 7700X at the cost of being a bit slower, or could be pushed to similar performance with a bit of overclocking at the cost of higher power consumption while remaining cheaper; the latter should be faster at the cost of being more expensive and more power hungry.
If you're lucky it's a 7700X3D priced somewhere between the discounted and launch price of the 7700X ($350-400), if you're unlucky it's a 7800X3D closer to the 7900X ($450-500).

You could of course make it weird and gamble a bit: buy a 7700X (or even a 7600X) now if you find a good deal on a mobo, then upgrade when the 7700X3D is actually released (might be later than the announcement). Could go wrong, could be a 7800X3D that's too expensive for your taste, or you lose too much selling the CPU you initially bought, but it is an option.

posted about a year ago
#3902 PC Build Thread in Hardware
LeonhardBroler: Alright thanks a lot for the PSU input Setsul, makes sense. Do you have an opinion regarding the monitors?

I'm not up to date on current models and don't really have the time to go down that rabbit hole right now.

posted about a year ago
#3899 PC Build Thread in Hardware
LeonhardBroler: be quiet's calculator for instance estimates the max power draw at 615 with all those details, making 80+ Gold 650W PSUs workable but relatively tight. Should I play safe and get 700/750W or is it overkill?

Why wouldn't a 650W PSU work? If a 650W PSU can't handle 615W load, it's not a 650W PSU.

Overcurrent protection is usually set to trip at around 120% of rated load, so it's not like that would mess things up. If the 3070 somehow gets an absolutely terrifying spike to >600W, so that it plus 150W from the 5800X just barely exceeds that trip point, then you really want the PSU to shut down anyway.

For that matter, 20ms spikes on the 3070 seem to be around 300W, and the 5800X should be limited to 142W even when boosting. Unless you manually increase that, you could get away with a 500W PSU if it's a good one.
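
If you want to sanity-check that yourself, the back-of-the-envelope numbers look roughly like this (the 120% OCP figure and the sustained GPU draw are ballpark assumptions, not values off any particular datasheet):

```python
# Rough PSU headroom check. The ~120% OCP trip point and the sustained GPU
# draw are ballpark assumptions, not datasheet values for a specific unit.
psu_rating_w    = 650
ocp_trip_w      = psu_rating_w * 1.20   # ~780 W before a decent unit shuts off
cpu_limit_w     = 142                   # stock PPT limit of a 5800X
gpu_sustained_w = 220                   # roughly what a 3070 draws sustained
gpu_spike_w     = 300                   # the ~20 ms transients mentioned above

print(cpu_limit_w + gpu_sustained_w)    # ~362 W sustained
print(cpu_limit_w + gpu_spike_w)        # ~442 W worst short spike
print(ocp_trip_w)                       # ~780 W -- plenty of margin either way
```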

posted about a year ago
#24 etf2l bs in TF2 General Discussion
bateman: setsulbros... how are we gonna spin this one back at rgl...?

It took ETF2L less time to fix the issue than for me to respond.
Take that however you will.

posted about a year ago
#3894 PC Build Thread in Hardware

#3890
No, can't see anything wrong with it.
#3891 does have a point though, there should be cheaper Z690 boards, and cheaper B650(E) boards for that matter. PCPartPicker probably does not have a lot of Slovakian shops. I mean, every part in that list shows only Alza, which probably wasn't even added specifically for Slovakia, they just added it to a dozen countries.

I dislike AIOs. https://www.teamfortress.tv/12714/pc-build-thread/?page=19#555
The Z63 specifically is a big no because you'd be paying 100 bucks for an LED display. The X63 is ok.
If you do care about noise levels you could get an NH-U12A for less; if you don't, you could slap 2000 or even 3000 rpm fans on an NH-D15 (or any dual tower, really).
Honestly, for the price of a Z63 you can get a full watercooling kit that'll blow it out of the water, pun very much intended.
An NH-D15 should really be enough though.

B650E got PCIe 5.0. Do you want/need PCIe 5.0? Get B650E. Otherwise B650.

posted about a year ago
#3889 PC Build Thread in Hardware

Frankly, I've got no idea what you'd need on the highest settings, because uuuuh we don't do that here.
One easy test you could do though is checking the GPU usage. If it's at 100%, then you definitely need a faster one.
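
If it's an nVidia card, something like this is the quickest way to read it while the game is running (nvidia-smi ships with the driver; Task Manager's GPU graph works too):

```python
import subprocess

# Quick GPU utilization check while the game runs (nVidia only; nvidia-smi
# ships with the driver). Sitting at ~100% means the GPU is the bottleneck.
result = subprocess.run(
    ["nvidia-smi", "--query-gpu=utilization.gpu,utilization.memory",
     "--format=csv,noheader"],
    capture_output=True, text=True, check=True,
)
print(result.stdout.strip())   # e.g. "97 %, 45 %"
```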

For the CPU, you're limited by the mainboard. The fastest you can get that fits, an i7-7700K, would be 35-40% faster. You'd have to find a used one, because they are not sold anymore.

Now, that means if you're getting less than 70 fps on those settings right now, that CPU wouldn't be enough. In that case you'd need a new mainboard as well so you can get a newer CPU, and depending on what RAM you've got it might not fit either, in which case you'd need new RAM too. Worst case, the GPU also isn't fast enough, and then you essentially need a completely new PC.
I'm not sure if you've got the budget for that.

Anyway, check GPU usage, if it's significantly less than 100% you should only need a new CPU.
If you're getting more than 70 fps as it is, look for a 7700K, maybe 7600K. More than 80 fps and you can probably get away with a 7700 or 7600. That should get you just about 100 fps.
If you're getting less than 70 fps and it's not the GPU, or you simply want significantly more than 100 fps, then it needs to be a bigger upgrade.
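
To put rough numbers on that (the 35-40% figure and the non-K estimate are from general single-thread benchmarks, not TF2 measurements):

```python
# Back-of-the-envelope projection: current fps * estimated CPU uplift.
# 35-40% for a 7700K, roughly 25% for a non-K 7700/7600 -- estimates from
# general single-thread benchmarks, not TF2-specific tests.
def projected(current_fps: float, uplift: float) -> float:
    return current_fps * (1 + uplift)

print(round(projected(70, 0.40)))  # ~98: under 70 fps now means even a 7700K falls short of ~100
print(round(projected(80, 0.25)))  # ~100: over 80 fps now and a non-K 7700/7600 gets you there
```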

posted about a year ago
#709 TF2 benchmarks in TF2 General Discussion
jnki: Setsul: Are you arguing that anyone with a 120 Hz monitor should cap TF2 at 120 fps?
Do I? No. But mentioning the current upper bandwidth cap - that is a byproduct of me trying to tell you that we have neither the capable screens nor the interfaces to run 4K 480 Hz. When in fact we don't even want to, at least yet. AKA "No one cares". Then why did you show 4K results in the first place when 1080p is all the majority cares about in competitive shooters.
[...]
The funniest thing is: the best screen tech we have on hand today is 360 Hz, which we can drive with a 12/13600K, so not even I fully understand why I am shilling 30 and 40 series when those cost more than the computer able to drive the game at 360 Hz. In my defence however I was romanticising the possibility of 480 Hz gaming in the very near future, and it appears to me that if the monitor were to be released tomorrow, the only way to get the last remaining performance droplets is by getting a two fucking thousand dollar PCB with a side topping of silicon covered in two kilos of aluminium and copper.

This is just schizophrenic. "4K benchmarks above 120fps don't matter because we don't have the monitors"
"But the current fps counter says 900, ignore that the average says 500, this is really important"

jnki: Setsul: Why are you so upset about high resolution benchmarks when you link them yourself because there's nothing else available?
I'm upset because you pull up with 4K results when, and especially in Source, "No one cares.", pull some numbers out of your ass and then "Are you going to see a noticeable difference between a 1080 and a 4090? No." when the card is three times faster

But your 4K high and 1440p high benchmarks I'm supposed to care about?

You started this whole bullshit with your theory that a 4090 would give more fps in TF2 at 1080p on low settings because of some secret sauce.
I said it's not going to make a difference.
I don't see how the 4090 being "3 times as fast" in completely different games on 4K is going to matter. The fps do not magically transfer.

jnki:
  1. Since we play in 1080p and this thread is about 1080p, shouldn't we pay more attention to 1080p
  2. Since the framerate difference is enormous between 4K and 1080p ranging hundreds on the same card, why would you in sound mind show me 4K results and then proceed to talk about how incremental the performance uplift would be
  3. Since the difference between top cards in 1080p is actually noticeable, isn't that worth focusing on
  1. Then why the fuck do you keep digging up 1440p high and other benchmarks?
  2. Show me a single fucking benchmark that's actually using the same system and benchmark, just with different GPUs, instead of mixing completely different setups until they "prove" your theory based on shitty math
  3. Is it? Show me a single fucking benchmark.
jnki: Setsul: No, we saw a 39 fps difference in a completely fucked up environment.
Trying to extrapolate from behaviour on a 7950X with 6000 MHz CL36 RAM to a 12900KS with unknown clock speed, unknown RAM, on a completely different benchmark and comparing that with yet another 12900K setup is completely meaningless.
The 39 fps at equal conditions indicates there is a difference in the first place. The 4090 managed an even greater difference, and at a lower resolution: 1440p. Therefore it is safe to assume that
  1. There is still headroom for 1080p which is likely going to be greater, just a matter of how quantifiable it is
  2. That we only see performance of $1000+ cards; shouldn't we ask ourselves how much bigger the gap is to a definitely slower 2080 Ti? What about a 2060? A 1060? How much performance should a current generation high-end CPU owner expect from upgrading his 4-6 year old GPU if he wishes to improve his frames even further in a CPU heavy title.

So there is a difference at 1440p on Very High Settings in CS:GO between a 3090 and a 4090. Why exactly does this mean there will be one in TF2 at 1080p on low?
Or are you trying to prove that the 4090 is significantly faster in general so it must always be significantly faster and the CPU limit is just a myth? What are you trying to show here?

And no, none of that is safe to assume.
Like I said, only the 3090 Ti and 3090 are apples to apples. If you extrapolate from those, the difference between the two should always be 39 fps no matter the resolution, which is obviously garbage.
If you extrapolate from the nonsensical 4090 4K results to "prove" that the difference between 4090 and 3090 (Ti) should be larger at 1080p then extrapolating in the other direction "proves" that at 8K the 3090 should actually be faster than a 4090, which is also complete garbage.

Garbage in, garbage out.
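
In frametime terms the problem is even more obvious. A quick illustration (the 450 fps baseline is just an illustrative high-resolution number, the 1000 vs 1039 is the extrapolated 1080p prediction from my earlier post):

```python
# Why a "constant 39 fps" gap can't hold across resolutions: in frametime
# terms the same fps delta corresponds to a different amount of work saved
# per frame depending on the baseline, which makes the extrapolation garbage.
def frametime_ms(fps: float) -> float:
    return 1000 / fps

for slow, fast in [(450, 489), (1000, 1039)]:
    saved = frametime_ms(slow) - frametime_ms(fast)
    print(f"{slow} vs {fast} fps -> {saved:.3f} ms saved per frame")
# ~0.177 ms at the 450 fps baseline, ~0.038 ms at the 1000 fps baseline.
```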

jnki: Now regarding the mysteriously clocked 12900KS. It was more of a comparison to the
12900K 5.7 / 3090 Ti / DDR5 7200 30-41-40-28 / 1080p Low / 928.18 fps
12900K 5.5 / 3090 / DDR5 6400 30-37-37-26 / 1080p Low / 836.43 fps
I was aiming to eliminate the 39 fps out of the equation because it was present in the "scuffed benchmark". Very well. A 12900KS at stock clocks (I've found other game benchmarks on the channel), which according to Intel ARK should boost to 5.5 give or take (which should be in line with the 5.5 12900K with the 3090), with DDR5 JEDEC and a 3060 Ti, scored 699 fps at 1080p Low. So then you are telling me that simply going from DDR5 JEDEC to 6400 gives you 136 fps, and at 7200, +200 on the clock and dropping the efficiency cores nets you a 228 fps total uplift. That is an Australian Christmas fucking miracle, don't you find. I may be delusional but there is no way going from a 3060 Ti to a 3090 isn't responsible for at least 35% of that total gain at 1080p Low.

It's fucking bogus math.

Let me do the exact same thing. You had a benchmark of a 12900KS, presumably with stock clocks, which can sometimes boost to 5.5 on at most two cores, otherwise 5.2, with some RAM (DDR5 JEDEC according to you, but it could just as well be DDR4) and a 3060 Ti, on unknown settings that look pretty low, "competitive settings" supposedly, versus a benchmark of a 12900K at a constant 5.5/4.3, DDR5 6400 MHz, and a 3090, on "low".

We can look at the thread and compare a 13600KF + 1660 Ti with a 13600K + 1080 Ti.
5.4 GHz, DDR4 3600 MHz, "competitive settings" for the former, stock clocks, DDR5 5600 MHz and "low" on the latter.

kindred: benchmark_test (mercenarypark)
new 13600K: 4812 frames 8.308 seconds 579.20 fps ( 1.73 ms/f) 81.025 fps variability
cheetaz: Benchmark_test.dem
4812 frames 10.726 seconds 448.61 fps ( 2.23 ms/f) 83.439 fps variability

Slightly higher CPU clocks but much lower RAM clocks, so I conclude that at least 35% of those extra 131 fps are from "generational improvements" of the 1660 Ti.

In conclusion, the 1660 Ti is faster than a 1080 Ti because it's newer.
That's what you're doing.
Benchmarks on vastly different setups are fucking worthless for these comparisons.
Stop doing it.
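
(For the record, the fps numbers in those demo lines are just frames divided by seconds, give or take rounding in the quoted seconds, so the ~131 fps gap itself is real; it's the attribution that's bogus.)

```python
# The demo results are just frames / seconds; the ~131 fps gap is real,
# the attribution to the GPU is what's bogus.
runs = {"kindred": (4812, 8.308), "cheetaz": (4812, 10.726)}
for name, (frames, secs) in runs.items():
    fps = frames / secs
    print(f"{name}: {fps:.2f} fps ({1000 / fps:.2f} ms/frame)")
print(f"delta: {4812 / 8.308 - 4812 / 10.726:.1f} fps")   # ~130.6
```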

posted about a year ago
#707 TF2 benchmarks in TF2 General Discussion
jnki: What the fuck is a Very High? CSGO doesn't have rendering distance sliders, hairworks, weather, NPC density, ray tracing etc. There is only one possible thing that can be set to "Very High" and it is the model quality. And let me tell you right now, the difference that makes is 100% pure snake oil. All the reviewers and benchmarkers make up their own boogie way of saying Highest/Ultra Settings/Ultra High/Very High, so for all intents and purposes "High" in Source is essentially Ultra Settings.

Very High is literally what the LTT screenshot says. Not my fault you linked that.

jnki: Setsul: Higher settings and higher resolution also mean more CPU load. The difference in GPU load is usually larger, so you're more likely to be GPU bound, but it's not always the case. Source engine is especially bad about this, where way too much is done on the CPU.
I disagree. If at GPU heavy titles many different CPUs level out their performance at 4K and half the time 1440p, while at 1080p show incremental differences, then why should a CPU heavy game care whatsoever about your resolution if no new objects are being drawn, only each frame being scaled up in resolution by the GPU, increasing its load. So from my standpoint, at CPU heavy titles, differences in performance from GPUs come either from class step-ups/downs or architectural differences.

You can disagree with how games work, it's not going to change the facts.
And no, the frames aren't just scaled up. That would be DLSS.

jnki: I'm happy that you accept that comparison graciously and acknowledge the 39 fps difference, but it is essentially two of the same cards, except one clocks marginally higher and that's why they sell it overpriced with a Ti label slapped onto it. Shouldn't we also expect the same level of performance scaling down to the 3080 and 3080 Ti? What about the 2080 Ti that has been found to be slower than a 3070 Ti? What about the 1080 Ti. Let's revisit your initial claim: "E.g. if a 1070 gets 480 fps and a 1080 490 fps and a 2080 495 fps and a 2080 Ti 496 fps then you can damn well be sure that a 4090 isn't going to get 550 fps but rather 499 fps." But then miraculously the 3090 and 3090 Ti have 39 fps between them in the same generation, so what gives? According to that quote, the gap should have been 5 fps, or as you said, going from a 2080 Ti to a 4090 will be 496->499 fps? Isn't it reasonable to deduce that the generational leap is much greater than that, especially starting with the 30 series, and now again with the 40 series. Because the 2080 Ti was like Intel in the past decade, feeding you 10% more at twice the price. The 30 series were a reasonable performance bump at the cost of heavily inflated power draw, but finally the new product actually was meaningfully faster. Now the same thing is happening with the 40 series, but the pricing is outrageous.

1. Yes, the numbers I pulled out of my ass aren't gospel. A 3090 gets different fps in CS:GO at 4K than what I predicted for a 1080 in TF2 at 1080p on the benchmark demo. What is your point?
2. Nice rant about nVidia. What is your point?
3. You accept that the whole benchmark is fucked and not usable, but want to extrapolate from it what it means for a different game, at different settings, with different GPUs, just because "that's how it should be" aka "things should be how I want them to be". What is your point?

jnki: Setsul: Then you got a bunch of benchmarks showing that lower details/resolution lead to higher fps. No one is surprised by that. See right at the start, GPUs aren't magic devices that pull more pixels out of the aether. Higher details/resolution also mean more CPU load.
Well, yes. That's the point in the first place: all of these show that at 1080p the performance is vastly different from 4K, and that settings play a role as well, albeit a smaller one. Your initial post fed me 4K High results bordering 480 fps. Why are you explaining to me that the gains are negligible while showing 4K High results when:
  1. The generational improvements can barely be traced if at all
  2. The ports on the card cap out at 4K 120 Hz
  3. Who plays at 4K High professionally or competitively in the first place
And the point is that at 1080p or even at 1440p the performance difference is pronounced, not in your 5 fps increments.

Can you shut up about "generational improvements"? No one cares.
What does the refresh rate have to do with this? Are you arguing that anyone with a 120 Hz monitor should cap TF2 at 120 fps?
Why are you so upset about high resolution benchmarks when you link them yourself because there's nothing else available?
Yes, the difference between 1080p and 4K or low and high settings is more than 5fps on the same card. That is literally the opposite of what we're arguing about. Again, why bother linking all that? What is your point?

jnki: I was looking at the current framerate without looking at the average, forgot to mention that, and was fully prepared to discard that benchmark altogether because again, it's not even uLLeticaL but a guy running around a bot-filled dust2, so I'm gonna concede this.
Setsul: Where could the difference be coming from? I don't know, maybe it's the fact that the 3080 was run with 4800 MHz RAM and the 4090 with 6200 MHz? CPU clock is also different.
In regards to the lower average I think that the NVIDIA driver is at play once again. So allegedly 200 MHz on the all-core and faster and tighter RAM, yet the average is lower by 100 fps while the current fps is 300 higher? 1% and .1% lows driver shit is at play again.

Again, it's a different setup with a different, unspecified overclock, likely DDR4 vs DDR5, and the current fps counter is completely bogus. At one point it shows 3.9ms frametime and 900+ fps.

You can't argue that 4 out of 5 values are too low because the driver fucked up, but the 1 value that you want to be true is beyond reproach and that current fps should be used instead of average fps because it's the one true average.

jnki: Setsul: 12900K + 3090 vs 12900K + 3090 Ti.
Except one is DDR5 6400 30-37-37-26; 12900K (SP103) / P Cores 5.5 GHz / E Cores 4.3 GHz / Cache 4.5 GHz
the other is DDR5 7200 30-41-40-28; 12900K (SP103) / P Cores 5.7 GHz / E Cores off / Cache 5.2 GHz
And there's your answer.
CS:GO getting scheduled on E-Cores sucks and source engine loves fast RAM and Cache. That's all there is to it.
https://www.youtube.com/watch?v=-jkgFwydrPA
Alright, 12900KS unspecified clock, allegedly turbos to 5.5, unspecified DDR5 RAM, let's pretend that it's JEDEC 4800, and a 3060 Ti at 1080p Low.
So according to what I consider previously agreed upon data, a 3090 and a 3090 Ti have about 39 fps between them in a sterile environment. Which should mean the difference the faster DDR5 kit, +200 all-core, and E-Cores off provide is 52.75 fps, with the remaining 39 fps taken out of the equation. Then how does a 3060 Ti score 136.67 fps below a 3090 with an identically shitty JEDEC kit of RAM at the same resolution and graphical settings, with the main or even the only difference being a step down in GPU class?

dm me your discord or something we are clogging the thread with this nerd shit, understandable should you not choose to engage with me out of time practicality or own mental well being concerns

No, we saw a 39 fps difference in a completely fucked up environment.
Trying to extrapolate from behaviour on a 7950X with 6000 MHz CL36 RAM to a 12900KS with unknown clock speed, unknown RAM, on a completely different benchmark and comparing that with yet another 12900K setup is completely meaningless.

You're arguing from your conclusion, that the fps are different because the GPU is different, while completely ignoring that CPU, RAM, and the benchmark itself are also different.
It's just pure garbage.

posted about a year ago
#705 TF2 benchmarks in TF2 General Discussion

You need to understand that a GPU does not conjure details out of thin air. Higher settings and higher resolution also mean more CPU load. The difference in GPU load is usually larger, so you're more likely to be GPU bound, but it's not always the case. Source engine is especially bad about this, where way too much is done on the CPU.

So you complained about 4K High benchmarks and pulled out 4K and 1440p Very High benchmarks instead. Not impressed, I must say.

Let's go through the benchmarks in reverse order:
We're not going to play some mystery guessing game, so the LTT benchmarks are completely worthless.
As you can see, at 4K the 4090 driver is fucked, at 4K the 6950XT is the fastest and at 1440p it falls behind even the 3090. So the only apples-to-apples comparison is the 3090 Ti vs the 3090, and their difference is ... 39 fps, regardless of resolution and total fps. Logically the prediction would be that at 1080p Very Low we'd see something like 1000 fps vs 1039 fps. Not very impressive.

The 3080 vs 4090 comparison is pretty scuffed: the 4090 shows a 450-500 fps average on 1080p low and 500-550 average on high (no idea how the current fps is so much higher than the average for the entire run), while the 3080 shows 650-ish on low and 670-ish on high. So I'm not sure what you're trying to show.
Where could the difference be coming from? I don't know, maybe it's the fact that the 3080 was run with 4800 MHz RAM and the 4090 with 6200 MHz? CPU clock is also different.
No idea where you're getting 46% more fps from, but it's most definitely not an identical setup and not even the same benchmark.
Again, worthless.

Then you got a bunch of benchmarks showing that lower details/resolution lead to higher fps. No one is surprised by that. See right at the start, GPUs aren't magic devices that pull more pixels out of the aether. Higher details/resolution also mean more CPU load.

Which brings us to the only benchmark that could've been relevant.
12900K + 3090 vs 12900K + 3090 Ti.
Except one is DDR5 6400 30-37-37-26; 12900K (SP103) / P Cores 5.5 GHz / E Cores 4.3 GHz / Cache 4.5 GHz
the other is DDR5 7200 30-41-40-28; 12900K (SP103) / P Cores 5.7 GHz / E Cores off / Cache 5.2 GHz
And there's your answer.
CS:GO getting scheduled on E-Cores sucks and source engine loves fast RAM and Cache. That's all there is to it.
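
If you want to test the E-core part of that yourself without touching the BIOS, pinning the game to the P-cores is enough. A rough sketch with psutil, assuming the first 16 logical CPUs are the P-core threads (which is how a 12900K with 8 P-cores + HT and 8 E-cores normally enumerates):

```python
import psutil

# Pin a running game to P-cores only, to test the E-core scheduling theory.
# Assumes logical CPUs 0-15 are the P-core threads (typical 12900K layout);
# "csgo.exe" is just the example process name here.
P_CORE_THREADS = list(range(16))

def pin_to_p_cores(process_name: str = "csgo.exe") -> None:
    for proc in psutil.process_iter(["name"]):
        if proc.info["name"] and proc.info["name"].lower() == process_name:
            proc.cpu_affinity(P_CORE_THREADS)
            print(f"pinned PID {proc.pid} to logical CPUs 0-15")

pin_to_p_cores()
```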

posted about a year ago