rahThread: AI players
#1
0 Frags +

With the rise of AI power and popularity, could TF2 bots be trained using neural networks and play similarly to real players? I.e., could we get a team or a full server of tf2 bots, with varying degrees of skill level, as practice for real players?

#2
9 Frags +

cant wait to join the midfight drill server to bomb into 2 ai aimbotting scouts

#3
72 Frags +

tf2center has implemented this technology for years

#4
7 Frags +

tf2center has implemented this technology for years

#5
-9 Frags +

but there is one already, me

#6
-1 Frags +

don't give the bot developers ideas rahmed, infesting pubs with ultra prem players

#7
4 Frags +

I’ve thought about related topics in this field (multi-agent reinforcement learning) before, so here’s a partially thought-out answer if you really care.

The most relevant result is from 2019, when OpenAI Five beat the reigning Dota 2 TI champions OG in a BO3. This was a big deal, since not only is Dota 2 a complex game, the AI also had to navigate cooperation between players on its own team while working adversarially against the others. However, it’s hard in practice to create a TF2 version of this, even if you ignore sniper/scout. The primary barrier is data/computational power. The idea is that the AI learns by playing games against itself repeatedly (self-play). The amount of computational power needed to get the AI to a pretty high level is most likely too expensive for peons. Not only is the training expensive, it’s also not cheap to consistently run the AI and have it play games against players. I also don’t know how convincing the Dota 2 match was since I don’t play that game (it may have suffered the same hallucinations as the early AlphaGo versions), but generally, if you want to improve performance, you just run more iterations of self-play, which costs even more resources. Maybe there exist people who have access to these resources and are able to design a reinforcement learning algorithm for TF2, so it’s theoretically possible.
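
To make the self-play idea concrete, here’s a toy sketch in Python. Everything in it is a stand-in I made up (a real setup would be the actual game plus a real neural network); it just shows the loop structure: clone a frozen copy of yourself, play against it, update on the outcome.

[code]import copy
import random

class TinyGame:
    """Stand-in for a match: higher roll wins a point, first to 3 wins."""
    def play(self, learner, opponent):
        score = [0, 0]
        while max(score) < 3:
            a, b = learner.act(), opponent.act()
            score[0 if a >= b else 1] += 1
        return score[0] > score[1]  # True if the learner won

class Policy:
    """Stand-in for a neural network; one float plays the role of its weights."""
    def __init__(self):
        self.skill = 0.0

    def act(self):
        return random.random() + self.skill

    def update(self, won):
        # Stand-in for a gradient step on the match outcome.
        self.skill += 0.01 if won else -0.01

learner = Policy()
for i in range(10000):
    opponent = copy.deepcopy(learner)  # frozen past self as the opponent
    learner.update(won=TinyGame().play(learner, opponent))[/code]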

However, it’s quite an open problem to create an AI that “mimics” real players in these complex games. When these AIs are trained, they are aiming to win the game, not to act like a player at some skill level. Thus, if you were able to play against such an AI as described above, I think you could learn a thing or two, but you would definitely feel that you are playing against bots and not humans.

Let me know if you have any questions!

#8
3 Frags +

What's Ur shoe size

#9
9 Frags +

ramble ramble ramble
one of the issues with AI is it will be really lame in TF2 unless you set strict rules

scouts, heavies and snipers will be aimbots
demomen will be autodetting. a neural network would take the easy route of standing still looking at stickies instead of learning pipe aim
pyro instant reflects
even soldiers would just waddle and shotgun switch, because aimbotting with the shotgun is a much easier route to reward for the network if you don't set clear rules

only way I could see neural networks seem "entertaining" is by forcing soldiers on gunboats and rewarding positioning somehow (height difference? timed rewards with destinations?)

Medics would probably be most interesting if they learn good positioning somehow, with a reward system that rewards both their own safety and healing multiple teammates (see the sketch below)

gunboats ultiduo/passtime would probably be the most practical application (other than MGE but nobody wants to see that)
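
a toy sketch of what hand-shaping that medic reward could look like, in Python (every field name here is made up, it's just to show the idea of mixing safety, positioning and heal output instead of only rewarding kills/wins):

[code]def medic_reward(tick):
    """Toy per-tick reward mixing self-preservation with healing output."""
    r = 0.0
    r += 0.5 * tick["heal_targets"]          # healing several teammates at once
    r += 0.1 * tick["height_advantage"]      # crude positioning signal
    r -= 2.0 * tick["damage_taken"] / 100.0  # discourage taking damage
    if tick["died"]:
        r -= 10.0                            # dying dominates everything else
    return r

# one tick: healing 2 people from high ground while eating 40 damage
print(medic_reward({"heal_targets": 2, "height_advantage": 1,
                    "damage_taken": 40, "died": False}))  # -> 0.3[/code]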

#10
40 Frags +

just got done training the neural network on roamer. based on what im seeing here it looks like every mid you should bomb the medic and hit him with two direct rockets. when the scout jumps at you direct him as well. and then when both the soldiers bomb airshot both of them. the demoman is gonna be there as well so you might as well hit him with three directs

#11
5 Frags +

If one were to go down the reinforcement learning road, i.e. trying to teach a model to play the game, the first barrier is properly interfacing with the game while avoiding latency, etc. My first instinct would be to use a server-side plugin and translate the state of the server into an interactable RL environment for the agent. Another road is to use convolutional networks, either on human footage if doing supervised learning or on the game directly if doing unsupervised learning. Once again, there are numerous technical barriers: you would have to sample the video feed into images at a reasonable rate, and deal with the latency of the game vs. the actions taken by the network (which could get worse if you wanted to speed up training by rendering the game several times in parallel). Convolutional neural networks are, in my opinion, less likely to reach satisfying levels of gameplay, but it has been done in Minecraft with some degree of success on simple tasks (https://minerl.io/).
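
To illustrate the plugin idea: assuming a hypothetical server-side plugin that streams the game state as JSON over a socket and accepts commands back (none of this protocol exists, it is just a sketch), the Python side could be wrapped as a gym-style environment:

[code]import json
import socket

class TF2ServerEnv:
    """Gym-style wrapper around a hypothetical state-streaming server plugin."""
    def __init__(self, host="127.0.0.1", port=27050):
        self.sock = socket.create_connection((host, port))
        self.buf = b""

    def reset(self):
        self._send({"cmd": "reset"})
        return self._recv()

    def step(self, action):
        # action: e.g. {"move": [1.0, 0.0], "aim": [0.0, 90.0], "attack": True}
        self._send({"cmd": "act", "action": action})
        state = self._recv()
        return state, state.get("reward", 0.0), state.get("round_over", False)

    def _send(self, msg):
        self.sock.sendall(json.dumps(msg).encode() + b"\n")

    def _recv(self):
        # read one newline-delimited JSON state message from the plugin
        while b"\n" not in self.buf:
            self.buf += self.sock.recv(4096)
        line, self.buf = self.buf.split(b"\n", 1)
        return json.loads(line)[/code]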

However, an approach that could yield a lot of interesting results is position evaluation. Similar to the state of a chessboard, a demo records each packet, and you can fairly easily get a .json representation of a game. One could realistically build a dataset of games covering a large number of situations and train a neural network to evaluate how good the position is for a given team. This could lead to players reevaluating how good certain plays are, or how advantageous certain components of a fight may be. For instance, number and uber advantage are the two main ways stalemate situations are decided. But maybe a fine-tuned model would give us a more definite idea of who has the theoretical advantage in more ambiguous situations, say you lost 2 players for a force and are pushing into a point with uber ad. The key here is all in the data: finding "good matches" and using smart features to represent the game correctly.
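
A minimal sketch of such an evaluator (PyTorch), assuming demos have already been parsed into a handful of hand-picked features per frame; the features here are invented, and the training labels would simply be which team ended up winning the round:

[code]import torch
import torch.nn as nn

# invented per-frame features extracted from a parsed demo
FEATURES = ["alive_diff", "uber_diff", "point_held", "avg_height_diff"]

class PositionEval(nn.Module):
    """Maps one game state to P(team A wins the round), like a chess eval."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(len(FEATURES), 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, 1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.net(x)

model = PositionEval()
frame = torch.tensor([[2.0, 0.4, 1.0, 0.3]])  # up 2 players, 40% uber ad, etc.
print(model(frame))  # untrained, so ~0.5; train on (frame, round_winner) pairs[/code]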

#12
3 Frags +

chess computer overlay live updating on casted games would be fire. tracking how many shots people have loaded etc and watching the number change every few hundred ms would be crazy. some cs casts used to do odds on who wins a developed round (1v4s etc); in tf2 people are constantly dying, spawning, building and using uber as well as shooting their guns, so the numbers would be more volatile

#13
19 Frags +

imagine having a robot in tf2 that gives you a blunder pop up whenever you bomb a med and miss 3 rockets

#14
7 Frags +

tftv casts need an eval bar

#15
0 Frags +

A live match evaluation bar would actually be incredible. I do wonder if a similar kind of technology could find the so-called diamonds in the rough and lead to a moneyball scenario where a team of previously misjudged players forms some exodia roster

#16
15 Frags +
RedTPC: tftv casts need an eval bar

"froyotech has full uber going into this last push but stockfish says it's even"

#17
-7 Frags +

On the note of AI, nvidia have released some sort of chat bot for its line of GPUs
With the inevitable AI pc hardware integration, do you think there could be a time where you just ask your computer to give itself a high fps config? Maybe adjust certain settings on the fly without human input?

#18
1 Frags +
Rahmed: On the note of AI, nvidia have released some sort of chat bot for its line of GPUs
With the inevitable AI pc hardware integration, do you think there could be a time where you just ask your computer to give itself a high fps config? Maybe adjust certain settings on the fly without human input?

rahThread in a rahThread...smh this guy

#19
8 Frags +
Rahmed: On the note of AI, nvidia have released some sort of chat bot for its line of GPUs
With the inevitable AI pc hardware integration, do you think there could be a time where you just ask your computer to give itself a high fps config? Maybe adjust certain settings on the fly without human input?

ah, yes, my own copy of mastercoms's consciousness running on my gpu, very handy

#20
5 Frags +

its valentines day and ur talking to ur gpu
