Setsul
Account Details
SteamID64 76561198042353207
SteamID3 [U:1:82087479]
SteamID32 STEAM_0:1:41043739
Country Germany
Signed Up December 16, 2012
Last Posted April 26, 2024 at 5:56 AM
Posts 3425 (0.8 per day)
1 ⋅⋅ 155 156 157 158 159 160 161 ⋅⋅ 229
#311 TF2 benchmarks in TF2 General Discussion

#309
The latency wasn't really the same.
3866MHz: 19 = 9.83ns, 21 = 10.86ns
2133MHz: 11 = 10.31ns, 15 = 14.06ns

So the latency of 2133MHz CL11 is actually about the same as 3866MHz CL19.
The exception is the command rate.
Ah yes, TF2 doesn't disappoint. Not only do RAM timings have way more of an impact than they should on the low end, the one timing that usually matters the least has the most impact on the high end.
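For reference, the nanosecond figures above follow from a one-line formula: true latency = CAS cycles divided by the memory clock, which for DDR is half the transfer rate. A quick sketch (the function name is mine):

```python
def cas_latency_ns(transfer_rate_mts, cas_cycles):
    """True CAS latency in nanoseconds.

    DDR transfers twice per clock, so the actual memory clock in MHz
    is half the advertised transfer rate (MT/s). Latency in ns is the
    CAS cycle count divided by that clock, times 1000.
    """
    clock_mhz = transfer_rate_mts / 2
    return cas_cycles / clock_mhz * 1000

# Reproduces the numbers from the post above:
for rate, cl in [(3866, 19), (3866, 21), (2133, 11), (2133, 15)]:
    print(f"{rate}MHz CL{cl}: {cas_latency_ns(rate, cl):.2f}ns")
```

This is why comparing CAS numbers across speed grades directly is meaningless: CL19 at 3866MHz is a shorter real-world delay than CL11 at 2133MHz would suggest at first glance.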

posted about 8 years ago
#308 TF2 benchmarks in TF2 General Discussion

DDR4 RAM timings? They affected fps significantly in my testing. I'm also pretty sure you're just seeing diminishing returns: going from 1333 to 2400MHz made quite a difference. 2133MHz RAM just isn't really holding the CPU back in the first place.

posted about 8 years ago
#1165 PC Build Thread in Hardware

#1165
EDIT:
Why did you remove the 760K? It was absolutely hilarious. Buying an APU with the graphics disabled and then adding the exact same graphics back as a discrete GPU.

PCPartPicker part list / Price breakdown by merchant

CPU: Intel Core i3-4170 3.7GHz Dual-Core Processor ($114.78 @ OutletPC)
Motherboard: Gigabyte GA-B85M-DS3H-A Micro ATX LGA1150 Motherboard ($39.99 @ Newegg)
Memory: Mushkin ECO2 8GB (2 x 4GB) DDR3-1600 Memory ($32.99 @ Newegg)
Storage: Samsung 850 EVO-Series 250GB 2.5" Solid State Drive ($88.00 @ Amazon)
Storage: Seagate Barracuda 1TB 3.5" 7200RPM Internal Hard Drive ($45.89 @ OutletPC)
Video Card: Sapphire Radeon R7 360 2GB NITRO Video Card ($88.98 @ Newegg)
Case: Fractal Design Core 1000 USB 3.0 MicroATX Mid Tower Case ($34.99 @ Micro Center)
Power Supply: EVGA 500W 80+ Bronze Certified ATX Power Supply ($22.98 @ Newegg)
Keyboard: Logitech K120 Wired Standard Keyboard ($11.99 @ SuperBiiz)
Total: $480.59
Prices include shipping, taxes, and discounts when available
Generated by PCPartPicker 2016-03-13 19:02 EDT-0400

Anyway, because TF2 is a steaming pile of shit 4 cores won't do much; an i3 with a higher clockrate is actually faster (and far cheaper).
You don't need an aftermarket CPU cooler. You can't overclock, so why spend money on a better cooler? Sure it's quieter, but wait until you get the build and see if the stock cooler actually bothers you (the GPU fan(s) should be louder anyway).
You could actually get an SSD as well after some improvements to the build.
As mentioned before, the 250 is integrated GPU level. A 360 isn't much more expensive and is literally twice as fast.
The N200 is a possible alternative to the Core 1000.
Again, you changed the PSU in the edit. I was about to say that if in 2016 a PSU can't pass 80 Plus Bronze, the lowest 80 Plus standard released in 2008, it's not worth buying, no matter how cheap. Now you changed it to a bottom tier PSU for almost 60$. Either get a good PSU for 60$ or a bottom tier one for <30$. Right now only the EVGA 500B (cheapest) and Corsair CX430M (semi-modular, but might limit your GPU upgrade options due to only having one PCIe 6/8 pin connector) are that cheap.
I really don't understand why you'd buy a shitty monitor for 120$. Either get a used one for 30 bucks if you decide to get an SSD or ditch the SSD and get a 144Hz monitor now.

PCPartPicker part list / Price breakdown by merchant

CPU: Intel Core i3-4170 3.7GHz Dual-Core Processor ($114.78 @ OutletPC)
Motherboard: Gigabyte GA-B85M-DS3H-A Micro ATX LGA1150 Motherboard ($39.99 @ Newegg)
Memory: Mushkin ECO2 8GB (2 x 4GB) DDR3-1600 Memory ($32.99 @ Newegg)
Storage: Seagate Barracuda 1TB 3.5" 7200RPM Internal Hard Drive ($45.89 @ OutletPC)
Video Card: Sapphire Radeon R7 360 2GB NITRO Video Card ($88.98 @ Newegg)
Case: Fractal Design Core 1000 USB 3.0 MicroATX Mid Tower Case ($34.99 @ Micro Center)
Power Supply: EVGA 500W 80+ Bronze Certified ATX Power Supply ($22.98 @ Newegg)
Monitor: BenQ XL2411Z 144Hz 24.0" Monitor ($269.00 @ Amazon)
Keyboard: Logitech K120 Wired Standard Keyboard ($11.99 @ SuperBiiz)
Total: $661.59
Prices include shipping, taxes, and discounts when available
Generated by PCPartPicker 2016-03-13 19:06 EDT-0400

You can save some money if you don't want to use lightboost at all or are willing to give up the extra functionality of the BenQ Motion Blur Reduction Utility by going with the AOC G2460PQU instead. I wouldn't recommend it though.

posted about 8 years ago
#137 Stabby talks class balance in TF2 General Discussion

I'm expecting stabby to call for a ban on four-wheeled one-engine cars because the car manufacturing meta is stale. Why not 5 wheels and 2 engines?

In all seriousness though, the reason why Scout and Soldier limit shouldn't be 1 is the same reason Demoman, Heavy, Engineer and Medic (in ETF2L Pyro and Sniper as well) limit is 1. It slows the game down.
Limit 1 for all would absolutely work, it would just be awful to play and awful to watch.
Spoiler alert for the Pyro and Spy mains: The second Scout and Soldier won't be replaced by Pyro and Spy. They'll be replaced by Heavy and Sniper. Every map is cp_gullywash now. glhf with pushing.

posted about 8 years ago
#1162 PC Build Thread in Hardware

Why not Skylake? There's no price difference and it should be faster.

PCPartPicker part list / Price breakdown by merchant

CPU: Intel Core i5-6500 3.2GHz Quad-Core Processor ($295.00 @ PCCaseGear)
Motherboard: Gigabyte GA-B150M-D3H Micro ATX LGA1151 Motherboard ($135.00 @ CPL Online)
Memory: G.Skill Ripjaws V Series 8GB (2 x 4GB) DDR4-2400 Memory ($61.60 @ Newegg Australia)
Storage: Samsung 850 EVO-Series 250GB 2.5" Solid State Drive ($129.00 @ CPL Online)
Storage: Seagate Barracuda 1TB 3.5" 7200RPM Internal Hard Drive ($69.30 @ Newegg Australia)
Video Card: Asus Radeon R7 370 2GB Video Card ($215.00 @ PLE Computers)
Case: Cooler Master N200 MicroATX Mid Tower Case ($61.00)
Power Supply: Corsair CX 430W 80+ Bronze Certified Semi-Modular ATX Power Supply ($77.00 @ Umart)
Total: $1042.90
Prices include shipping, taxes, and discounts when available
Generated by PCPartPicker 2016-03-08 09:42 AEDT+1100

You could also get an i5-6600 instead if you want.
There are cheaper mobos, but I don't see the need to cut corners. Although even a GA-H110M-H would be far better than the H81M-E34.
Same for the SSD. You get an SSD because it's fast, so why go for a slower SSD just to save money? $/GB is far better for 250GB SSDs. Maybe don't get an HDD at all if 250GB is enough. As for HDDs, the best $/GB is at 3TB, so if you think you'll need that much it's better to get 2TB now instead of adding a second 1TB HDD later.
N200 as sort of a placeholder, there's other options.
I don't see the point of a 620W PSU for a 300W build. The 520W version is 20$ cheaper. But even then, there's 80+ Gold Seasonic PSUs at the same or lower prices.
Your cheapest option would be the CX430M, which is also semi-modular. If you want to upgrade later to a GPU that needs two PCIe connectors you'd either have to use a molex to PCIe adapter or, if you're uncomfortable with that, get a 500B. If you're sure you don't want to upgrade to a 2 connector GPU later and want a better PSU, there's a G360 for 89$.
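The $/GB sweet-spot argument above is easy to check yourself. A quick sketch, with purely illustrative ballpark prices (not quotes from any retailer):

```python
# Hypothetical mid-2016 ballpark prices: (capacity in GB, price in $).
# The exact numbers are illustrative; only the relative $/GB matters.
drives = {
    "120GB SSD": (120, 65.00),
    "250GB SSD": (250, 88.00),
    "1TB HDD":   (1000, 46.00),
    "3TB HDD":   (3000, 85.00),
}

def dollars_per_gb(capacity_gb, price):
    """Price efficiency: dollars paid per gigabyte of capacity."""
    return price / capacity_gb

for name, (gb, price) in drives.items():
    print(f"{name}: ${dollars_per_gb(gb, price):.3f}/GB")
```

With numbers like these, the 250GB SSD and the 3TB HDD come out clearly ahead of the smaller drives on $/GB, which is the point being made above.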

posted about 8 years ago
#1156 PC Build Thread in Hardware

#1155
Not a fan of compromising on the mobo and RAM when it doesn't even get you a proper i5. It obviously depends on the game but most (especially TF2) are more dependent on clockrate than core count. There are some games that utilize 4 or more cores properly, but only if your GPU isn't a bottleneck. So I'd need to know your GPU and which games you're going to be playing, but for TF2 I can guarantee that an i3-6300 will actually get you more fps.

PCPartPicker part list / Price breakdown by merchant

CPU: Intel Core i3-6300 3.8GHz Dual-Core Processor (£105.00 @ Amazon UK)
Motherboard: Gigabyte GA-B150M-D3H Micro ATX LGA1151 Motherboard (£71.26 @ Ebuyer)
Memory: Kingston FURY 8GB (2 x 4GB) DDR4-2400 Memory (£34.38 @ More Computers)
Total: £210.64
Prices include shipping, taxes, and discounts when available
Generated by PCPartPicker 2016-03-02 21:25 GMT+0000

#1156
Thank you.

posted about 8 years ago
#7 Performance issues - various games in Hardware

Just from the games you said you're having issues running.

posted about 8 years ago
#5 Performance issues - various games in Hardware

Try actually logging the temperatures, clock speeds and frametimes. MSI Afterburner can do all of that (no CPU clock speeds though).

posted about 8 years ago
#1153 PC Build Thread in Hardware

4. The GPU AMD showed (Polaris 10, the smaller chip) was built on Samsung/GF 14nm, which was ready before TSMC 16nm. So unless AMD was terribly slow nVidia absolutely can not have had a GPU made on TSMC 16nm ready before them.
5. For the middle chips (Polaris 11, GP104) there's a similar situation. AMD is aiming for a mid 2016 release, whereas some are claiming nVidia is going to release GPUs in April. Of course that's bullshit. The shipping manifests used to "prove" that nVidia had Pascal chips actually prove the opposite. January 2016 is the earliest they could've had that chip, unless they had it lying around and were waiting for the test equipment, which would be extremely embarrassing. Setting a release date so close to first silicon that you only have time for one spin is hugely optimistic at best. But when you haven't even tested the first chip yet, it's just a reckless gamble.
6. Then there's the rumour that the GP100 chip will be the first to be released. That's just crazy. First of all, you don't start with a big chip on a new node. No one ever has. Not even Intel, and they can do some crazy shit (a 600-700mm² chip each year, which TSMC couldn't even build, compared to one big chip (500-600mm²) every 2 years from nVidia). Secondly, you never, ever do the biggest chip of a new architecture first. To recap, nVidia needs to add the HBM interface and DX12 stuff. But Pascal also introduces other features which should definitely be on at least the GP100, unless they want to do a separate chip for the professional market, which would be extremely expensive for them. If you remember what I wrote about the Titan X then you know that nVidia needs a new chip with good double precision performance. So delaying the professional chip doesn't make sense, and building two chips that are almost identical back to back doesn't either. I'd say it's safe to assume that there will be only one big chip for now, so it'll have both NVLink and mixed precision. At this point you're basically looking at a new architecture, even if they don't change anything else. I said you never, ever do the biggest chip of a new architecture first. Well, I lied. nVidia did it once, with Fermi. It was a disaster and took them 9 months longer than anticipated to deliver something that was essentially a GPU/furnace hybrid. Granted, they did that right after they had yield problems with much smaller chips on 40nm, so that could've been the main issue (and shows that they were neither making good designs nor smart decisions at the time), but either way, when you're introducing a new architecture on a new node, unless everyone in the company has suddenly gone off the deep end, you do not start with a big chip.

What does this mean? Best case nVidia will release their cards a few months after AMD (which gives AMD some time to get their drivers up to scratch), worst case they pull another Fermi.

We've finally reached the end of our journey. To summarize:
-16/14nm will be awesome
-AMD look like they will release on schedule
-nVidia will release a few months later if all goes well, much later if they fuck up
-Pascal won't be noticeably more efficient or faster than Polaris, nor will it have a better price to performance ratio

tl;dr
-> Definitely wait for AMD Polaris (earlier release)
-> Wait for Pascal if you want to be sure or want price drops (competition always brings price drops) but don't expect anything game-changing.

posted about 8 years ago
#1152 PC Build Thread in Hardware

I don't like where this is headed. Generally, when you can't allow AIB partners to "improve" the reference design with something as simple as slapping twice the VRAM on it, because it would make the next more expensive GPU obsolete, something is clearly wrong. It also means that they can't ever release the full GM200 as a non-Titan like they did with the GK110 (780 Ti), because there wouldn't be a difference in double precision performance. They are simply milking the Titan brand now.

But this is just a hint of what's to come if nVidia is ever allowed to get even higher market share or even a monopoly.

Time for a joke before we take a look at the really heavy stuff.
Marketing. Remember the 970? Remember how I said that there's nothing wrong with what nVidia's engineers have done? Well, there are some people who did fuck up: nVidia's marketing. And boy did they fuck up hard. There's a class action lawsuit now because some of them couldn't be bothered to check the 970's specs and instead copy-pasted them from the 980. So keep that in mind: nVidia is not getting sued because the bandwidth is lower, but because the L2 cache size and ROP count they claimed were wrong.
Whenever you fuck up copy-pasting, remember it could've gone worse: you could've caused a multi million dollar class-action lawsuit.

The software side. You are in for a wild ride, because here nVidia is doing all the wrong things for all the wrong reasons.
This will be quite short because it's explained more in-depth in the videos and there's really not much to add.
First up PhysX. It can make sense, but most of the time it doesn't. This should pretty much sum it up:
http://i.imgur.com/A9ZiK3M.gifv

Mirror's Edge relies on PhysX SDK to handle rigid body physics and ragdolls (on CPU)

These are TF2 levels of ragdoll physics fuck up. So it's not any better than even the shittiest implementations and it's not GPU accelerated. A prime example of when PhysX simply doesn't make any sense.
But even when it does make sense, nVidia still found a way to screw over customers. Hybrid PhysX used to be a thing. Got an AMD card but want GPU accelerated PhysX? Just add an old nVidia GPU as a dedicated PhysX card. Guess what nVidia did. They blocked PhysX completely if an AMD card is detected, because "a high quality experience in such a scenario can't be guaranteed." Yeah, right. If you're running a game on your nVidia GPU and don't even have AMD drivers installed, it's still blocked. Definitely because the AMD card would fuck up everything, but only when PhysX is used, otherwise it's completely fine, and definitely not to sell more GPUs.

Next: GameWorks. Name a single game that actually saw performance improvements from using GameWorks. So why are they doing it?
1. It prevents AMD from being able to optimize drivers for those games.
2. It allows them to do whatever the fuck they want.

Which brings us to the last two points: tessellation and sabotaging their own cards.
They especially like doing this through GameWorks, but they've really been doing it whenever they can: turning up tessellation. Why? Because going by tessellation performance it's Maxwell > Kepler > Fiji > Hawaii. So turning it up hurts all cards, but it hurts even AMD's newest cards worse than nVidia's older ones. At this point you might still think it's just anticompetitive and Kepler is just collateral damage.
But then there's GameWorks. You should probably watch the video. Short version: they change some effects so they aren't compatible with the old renderers anymore, effectively rendering all renderer optimization for Kepler (and older) void. Either nVidia is horribly incompetent, in which case you shouldn't buy their GPUs, or they're doing it on purpose, in which case, if you're not planning on upgrading every two years, you shouldn't buy their GPUs either if there's an alternative. That's definitely planned obsolescence.

Now that you know what bullshit nVidia has been up to lately you should be able to understand why I really don't recommend nVidia GPUs when there's an AMD alternative. Most of the time AMD offers a better price to performance ratio and the "nVidia drivers are better" argument doesn't really work anymore. If you paid attention to GPU releases then you probably know that the 980 Ti is faster than the Fury X. Wrong. A few driver updates later they switched positions. Same for the 780 Ti and 290X.
I'm not saying you should never buy nVidia cards, but there's only four reasons left:
-There is no AMD card at the price point you need or its price to performance ratio is worse (which frankly never happens)
-You absolutely need an nVidia exclusive feature like CUDA.
-You want to overclock fairly high (you've got to admit that nVidia cards usually do have more headroom, mostly because of the lower power consumption / higher efficiency)
-You need the higher efficiency / lower power consumption

Think we're done? I've just started, but you need to know some of this to understand what I'm about to explain.
First, why is a new manufacturing process node such a big deal? That's pretty simple. A new node means smaller, faster and less power hungry transistors. That means smaller, faster and more efficient GPUs; smaller GPUs mean more GPUs per wafer (cost per wafer stays the same), and that means cheaper GPUs. So even if you change absolutely nothing, just take the same architecture you used on 28nm and build it on 16nm, you're going to get more efficient and faster GPUs that are cheaper at the same time. Sounds great, right?
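The "more GPUs per wafer" part can be sketched with the standard dies-per-wafer approximation. The die sizes below are illustrative, not real chip dimensions, and the formula ignores defects and die aspect ratio:

```python
import math

def dies_per_wafer(wafer_diameter_mm, die_area_mm2):
    """Standard rough estimate of candidate dies per wafer:
    wafer area / die area, minus an edge-loss correction term
    for partial dies around the wafer's circumference."""
    radius = wafer_diameter_mm / 2
    gross = math.pi * radius**2 / die_area_mm2
    edge_loss = math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2)
    return int(gross - edge_loss)

# Illustrative: a hypothetical 400mm² chip vs. the same design
# shrunk to half the area on a standard 300mm wafer.
old_node = dies_per_wafer(300, 400)
new_node = dies_per_wafer(300, 200)
print(old_node, new_node)
```

Since wafer cost is roughly fixed, more than doubling the die count (edge losses shrink too) directly cuts the cost per chip, before any speed or efficiency gains are even counted.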
It's not all sunshine and rainbows though. Intel switched to FinFETs at 22nm, Samsung/Globalfoundries at what they call 14nm and TSMC at what they call 16nm. What those are and what they do isn't that important to know, but it means you can't literally take the plans for your old chips, shrink them down and be done with it like in the golden days. You're looking at almost a complete redesign. This is the first thing on a long list of things that are good for AMD, but bad for nVidia. Why? Basically because AMD is strapped for cash. Maxwell is very well optimized for 28nm; AMD's GCN isn't, because they could never afford to do it. On a new process nVidia loses those optimizations and will therefore see less of an efficiency improvement than AMD.
The next thing is memory. AMD already successfully implemented HBM, nVidia hasn't. AMD can focus on efficiency gains and moving to 16/14nm, nVidia still has to design some completely new stuff.
The same goes for all the DX12 hardware stuff. Not having any of that on Kepler and Maxwell was great on DX11, but they absolutely can't afford to keep it that way now.
The end result is that nVidia should have their hands full getting a working chip, even without worrying about efficiency. I'm not taking any bets on which architecture will end up being more efficient, but the difference should be negligible. That's two of the four reasons for nVidia cards gone.
But let's take a step back to "have their hands full getting a working chip". Right now there are some outlandish rumours going around that nVidia already got working chips, maybe even of the GP100 (aka big Pascal). That's bullshit. And there's a huge list of reasons why.
1. AMD has always been faster to new nodes.
2. AMD showed a working GPU at CES. nVidia did not. Why didn't they if they had working chips?
3. Instead they lied at CES about it. The last time they did that was with Fermi and that was a massive trainwreck.

EDIT: fixed the link
continued in next post

posted about 8 years ago
#1151 PC Build Thread in Hardware

I wanted to answer all of this yesterday but I didn't have time to watch the video.
Beware, huge wall of text incoming. This sort of turned into a huge rant about everything.

#1144
Yes. Mostly AMD Polaris though. Should be released around June/July. We're expecting an entry level chip and a mid range chip, both vastly more efficient than the chips they're replacing because it's the first process node change in 4 years and we're skipping one node altogether. But no one cares about efficiency right? Here's why you should be hyped anyway:
Mid range as in somewhere between 970/390 and 980/Fury. Now imagine a GPU that's more efficient than the 970/980 (the only downside of the 390), faster than the 970 and still cheaper than both 970 and 390.
Definitely something worth waiting for.

Now nVidia, that's complicated. This ties into
#1146
because a lot of this is explained in those videos.
Reasons why you should watch AdoredTV's videos:
- His accent is glorious and
- he's right.

Right now AMD's goals and decisions, on the hardware, software, marketing and corporate sides alike, happen to align with what's good for consumers. Most of it they aren't doing because they like us, but rather because they have to. From our perspective they're doing the right things for the wrong reasons, which is not a bad thing for us.

On the hardware side of things nVidia is doing a lot of things that are technically (literally) the right thing to do, but aren't ideal for the consumers. It's not ideal, but if they had too large of an impact on performance you could always just vote with your wallet and force them to do something about it, right? We'll see about that later.
First some examples:
DX12. Kepler and Maxwell have basically no DX12-only features implemented in hardware. It does make sense, since you can use all the saved space to improve DX11 performance, but it also means that you won't benefit from DX12 on the GPU side (CPU side gains stay the same, obviously). It's not exactly planned obsolescence, more like welcome obsolescence. They get better DX11 performance and don't have to spend money on engineering DX12 hardware before it's even released. Consumers being forced to upgrade earlier, since their old cards won't get a boost and therefore a prolonged lifespan from DX12, is a pleasant side effect for nVidia but an unpleasant one for us. Whether or not they had that in mind doesn't really matter. Note that AMD didn't add asynchronous compute engines because they're so forthcoming and kind-hearted, but rather because they don't know how much longer they'll have to keep rebranding GCN architecture cards. They simply can't afford to develop two or three new chips per generation (it's been one or at best two for a while), so their chips have to last longer. Letting them become obsolete once DX12 was released just wasn't an option for AMD.

Double precision performance on the GM200. Traditionally nVidia has needed a bit more die space for the same performance as AMD. This isn't a problem when you can just make a bigger chip, although it increases cost. Both Fiji and the GM200 were balls to the wall ~600mm² chips. TSMC physically can not build a chip larger than 625mm², so going bigger just isn't an option anymore. On the AMD side, the DX12 stuff taking up space that nVidia doesn't use is balanced by the GDDR5 memory interface being replaced with a far smaller and less power hungry HBM interface. nVidia didn't want to lose the performance crown to an architecture that even with the advantage of HBM is still less efficient simply because they couldn't fit enough units on a die, which meant that they had to cut something else. They chose double precision performance. So while this made it possible for nVidia to produce a chip (GM200) that is more efficient and beats AMD (Fiji), although only barely, it puts the Titan X in a really awkward spot. The usual semi-professional segment that loved Titans before, because they offered $5k Quadro DP performance for $1k, doesn't have a reason to buy it when the 980 Ti offers almost the same at a far lower price and the Fury X offers more than double at an even lower price than that. More on that later under "corporate".

The 970 memory "issue" actually isn't a case of this. What they did allowed them to disable a part of the cache and ROPs, which they otherwise couldn't have done. That meant higher yields and therefore a lower price. In fact the yields are so good that we still haven't seen a 960 Ti. With the GK104 we saw 3 GPUs: the full chip (680), a cut down chip with fewer shaders (670) and an even further cut down chip with fewer shaders, fewer ROPs, a smaller L2 cache and a smaller memory bus (660 Ti). With the GM204 there was never a need for the third one, since they could cut down cache and ROPs without having to cut down the bus width. They didn't sell the 970 that "cheap" because it's broken; they spent a lot of money and effort on making it cheaper. I'm pretty sure they could sell the 970 even cheaper: worst case their cost is somewhere between a hypothetical "normal" 970 and a planned 960 Ti, best case it's exactly the cost of a 960 Ti, which would sell at ~250$. Either way they're making a killing, whereas the 390 is an improved 290 with more VRAM marked down 70$, which can't be good for your margin. I'm also pretty sure that this wasn't about the GM204, but rather about the GM200. I'm sure nVidia is fine with making a couple of millions extra, but yields for a 398mm² chip were never going to be a problem. For a 601mm² chip, on the other hand, you want to use everything you can think of to improve your yields. I doubt the yields for the 980 Ti are so good that nVidia never even considered using the same trick; it's more likely that they're afraid of the bad press if they "gimped" a high end card. My point is that the 980 Ti could be cheaper if people weren't so upset about something that isn't actually an issue. The driver did mask it pretty well: barely any performance impact in games, and it took fairly long before anyone noticed something was up at all.
I'm blaming bad journalism for this: clickbait titles, and never bothering to explain that what they're getting sued over isn't the memory bandwidth but rather the other specs. More under "marketing".
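To see why yields matter so much more at 601mm² than at 398mm², here's a sketch using the classic Poisson yield model. The defect density below is a made-up illustrative number; real figures are foundry secrets.

```python
import math

def poisson_yield(die_area_mm2, defect_density_per_cm2):
    """Classic Poisson yield model: the fraction of dies that come
    out with zero defects, given a random defect density."""
    area_cm2 = die_area_mm2 / 100
    return math.exp(-area_cm2 * defect_density_per_cm2)

D0 = 0.1  # defects per cm², hypothetical
print(f"398mm2 (GM204): {poisson_yield(398, D0):.0%} defect-free")
print(f"601mm2 (GM200): {poisson_yield(601, D0):.0%} defect-free")
```

The bigger die loses a noticeably larger fraction to defects at the same defect density, which is exactly why being able to salvage dies with a flaw in the cache or ROPs (the 970 trick) is worth so much more on a chip the size of the GM200.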

On the corporate side they started doing stuff that was absolutely right from a business perspective and completely acceptable, for example the Titan, Titan Black and Titan Z. AMD couldn't beat the 780, so it was reasonable not to release the full GK110 chip, since it is a very big (561mm²) and therefore very difficult and costly to produce chip. Releasing the Titan meant that you could get better performance (and double the VRAM) if you absolutely wanted/needed it at all costs, but you'd have to pay for it. The main point however was that you could get the double precision performance of a 5000$ professional grade GPU (Quadro K6000), although not the driver support, at a fraction (1/5) of the price. Once AMD released the 290X it got even better: the 780 Ti offered the full GK110 at a normal price, while the Titan Black got you the double precision performance if you needed it. Again at a premium over a gaming GPU, but still far cheaper than a workstation GPU.
And then came the Titan X. Again, holding back the full chip if there's no competition is fine. Not being able to fit higher double precision on it is fine too, it's already huge (601mm²). But they still wanted to make more money, so they forbade add-in board partners from manufacturing 12GB 980 Tis, because double the VRAM is now the only selling point of the Titan X. The 980 Ti and Titan X are so close that the added power draw of the doubled VRAM on the Titan can actually make it slower than the 980 Ti due to power budget restrictions. Yet they're still selling it at 1000$, asking 50% more just for double the VRAM and no other advantages.

continued in next post

posted about 8 years ago
#1141 PC Build Thread in Hardware

You want 50% more fps. There's no newer architecture available on AM3 and the fastest CPU has only a 10% higher clockrate. More than 4 (or even more than 3) cores do nothing in TF2. So unless L3 cache magically gets you 40% more fps I don't see it happening.

I'd probably go for i3-6100 + 2*4GB 2400MHz CL15 + GA-B150M-D3H.
Not possible without a new mobo.

At 1680x1050 on just high, not very high, you could probably get away with a 260X (although I'd go with a 370 because it's almost the same price).

It would in theory be possible to get both, just barely, if you get a Haswell CPU (e.g. i3-4170) instead and keep using your old RAM, but I really don't like it. Sure, if you say Crysis on high is the most demanding game you'll ever play, it's fine. But what if one day you want to play it on very high, or Crysis 2? So I'm really more in favour of a "proper" upgrade for CPU/mobo/RAM and then getting a mid range GPU (285/380) later, possibly after this year's new GPU releases in summer.

posted about 8 years ago
#1139 PC Build Thread in Hardware

#1139
TF2 should be possible easily, Crysis depends on the resolution and what you consider good settings.

The CPU obviously needs to be replaced for TF2, but the motherboard is a dead end as well; if the RAM is DDR2 it's useless, and even if it's DDR3 I wouldn't keep it. Sell the whole bundle or just give it away.
You'll have to keep using the 6770 because your budget just isn't enough for CPU, mobo, RAM and a GPU on top of it.
If you do it the other way round you have to keep your CPU, so no improvement in TF2 but you could play Crysis at max.

EDIT:
To clear it up a bit:
Depending on what you consider good settings in Crysis and the resolution you might need a new GPU.
You definitely need a new CPU and mobo (and you should get new RAM) to get more fps in TF2.
You can not afford both. So tell me what settings/res you want in Crysis and if it turns out you would need a new GPU decide what's more important to you.

posted about 8 years ago
#1137 PC Build Thread in Hardware

#1137
There's no reason to upgrade just because it's old. Only upgrade if you want to improve some aspect of performance.
You should set some sort of performance goal.

Other than that some preliminary questions:
Want to / willing to overclock?
Need more storage capacity?
Want an SSD?
Case?

posted about 8 years ago
#7 hud_combattext_batching_window in Customization

Because that heavy took literally 1000 dmg and you want to prove it.

posted about 8 years ago