You're conflating a few issues. Let's go through each of them.
pancake_stacks: I never ignored anything
pancake_stacks: And the material you're telling me to read has this comment lol:
You can think of the interpolation buffer (lerp) length as additional time added to your ping. It's in your best interest to keep lerp as low as possible.
You ignored everything else in that material explaining why the lowest interp (15.2ms) is not usable. That's what I was referring to.
pancake_stacks: Is it really hard for you two to grasp if someone sees 0 lerp on their netgraph, that they are going to perceive that's the value set?
Your perception isn't what matters here. What matters is that you made an argument (10ms interp is too high / 0ms is better) based on a false perception, and, despite being corrected, you continue to make it. Your feelings are pure placebo, with no actual reasoning behind them, and they lead you to use harmful network settings. And despite numerous prompts from Degu and me to offer other reasoning, grounded in networking logic, you instead essentially say "lol" a bunch as your argument.
pancake_stacks: Why are you even providing a 0 lerp setting in your config for LAN? Does LAN play bypass this invisible hardlock?
There is no 0 lerp setting in my config. You have to realize how interp is determined: the engine takes the greater of cl_interp and cl_interp_ratio / cl_updaterate. My config sets cl_interp_ratio to 1, which at a 66/s update rate gives 1/66 ≈ 15.2ms lerp, regardless of cl_interp being 0.
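To make that concrete, here's a small sketch of the calculation. The function name and structure are illustrative, not actual engine symbols; it just applies the "greater of the two values" rule described above:

```python
# Hypothetical sketch of how the effective interpolation (lerp) time is
# derived from the client cvars. Not engine code; names are illustrative.

def effective_lerp_ms(cl_interp: float, cl_interp_ratio: float,
                      cl_updaterate: float) -> float:
    """Effective lerp is the greater of cl_interp (seconds) and
    cl_interp_ratio / cl_updaterate, returned in milliseconds."""
    ratio_interp = cl_interp_ratio / cl_updaterate  # seconds per update window
    return max(cl_interp, ratio_interp) * 1000.0

# cl_interp 0, cl_interp_ratio 1, 66 updates/sec -> ~15.2 ms
print(round(effective_lerp_ms(0.0, 1.0, 66.0), 1))  # 15.2
```

This is why setting cl_interp to 0 doesn't actually produce 0 lerp: the ratio term wins and floors the buffer at one update interval.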
pancake_stacks: When I create a LAN server and join from a laptop on my network, the lerp is hard locked at 15 with the same settings I used to get 0 lerp on a community server. LOL
That's because on a local/LAN server, the client and server are synced in a different way, so none of that matters. Again, if you understood what interp is (the time the client buffers so it can smooth between server updates), you would know that 0 interp would make everything jittery and jumpy, with no continuous motion.
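The jitter argument can be sketched in a few lines. This is a toy model (not engine code, assumed names): the client renders "in the past" by the lerp time and blends between the two snapshots that bracket the render time. With no buffer there is no bracketing pair, so positions can only snap from one update to the next:

```python
# Toy sketch of snapshot interpolation; illustrative only, not engine code.

def interpolate_position(snapshots, render_time):
    """snapshots: list of (timestamp_seconds, x_position), oldest first.
    Returns the position blended between the two snapshots that bracket
    render_time; falls back to snapping to the latest snapshot."""
    for (t0, x0), (t1, x1) in zip(snapshots, snapshots[1:]):
        if t0 <= render_time <= t1:
            frac = (render_time - t0) / (t1 - t0)
            return x0 + frac * (x1 - x0)
    # No bracketing pair (e.g. lerp buffer too small): snap to the newest
    # snapshot -- this is the jumpy, discontinuous motion described above.
    return snapshots[-1][1]

snaps = [(0.000, 0.0), (0.015, 1.0), (0.030, 2.0)]  # ~66 Hz updates
# Rendering ~15.2 ms in the past blends smoothly between two snapshots:
print(interpolate_position(snaps, 0.030 - 0.0152))
# Rendering at "now" with no buffer can only snap to the latest snapshot:
print(interpolate_position(snaps, 0.031))
```

With a buffer of at least one update interval, the render time always falls between two received snapshots, so motion stays continuous even though updates arrive discretely.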
pancake_stacks: And it's funny talking about understanding things and people using wrong commands in their configs, etc. Considering back when I asked you about the net_splitrate commands in your cfg being server-side-only commands, you didn't even know if they worked for the client or not, but hey, might as well throw them in there, right? It's funny considering there was talk about placebo earlier.
The difference here is that net_splitrate was based on logical inference (the server and client using the same packet code), rather than a placebo-prone feeling. I just could not prove with absolute certainty that the client uses that packet code path; even with decompilation, the networking paths are unclear due to legacy/dead code that I can't really isolate.
The larger problem is that you're comparing something that is harmless if it doesn't work and beneficial if it does (net_splitrate > 1) with something that is harmful whether or not it works (interp <= 15.2ms). Perceived benefit in my case has no negative impact on the conclusion, since there are no negative effects; in your case, perceived benefit from something that demonstrably does not work is actively harmful.