Original Post — Direct link

Hello, I just want to hear your opinion about FPS in this game. I know this is a beta, but do you guys honestly think that FPS will get better?

I have the same FPS at all settings, maybe a 10-15 FPS difference. I play at 100-150 FPS. Sometimes 200, sometimes it drops to 80. Huge FPS drops. I hope it will get better.

about 4 years ago - /u/RiotArkem - Direct link

Originally posted by PlataBear

It's because of some internal issues with VALORANT. One of the devs made a security change to how an executable worked and it had a drastic impact on FPS that hasn't been fully resolved yet. He said it was reverted and FPS should return to normal, but clearly that's not the case, because before I was getting 240-350 FPS average and now I barely stay above 144.

The security change that was causing performance issues has been well and truly fixed... but several other changes have slipped in that caused performance to regress.

Right now we've increased the number of people working on performance issues and so we've got some improvements coming in the next patch. We're also identifying and planning out longer term work to improve performance, especially on high end computers.

It's hard to say for sure but I expect that performance after this next patch will be similar to the performance of the first beta version and then the patch after that should be overall better. (Your mileage may vary, this is just an estimate, etc, etc)

about 4 years ago - /u/RiotArkem - Direct link

For most players the limiting factor on your frame rate will be single-threaded CPU performance. Unfortunately most of the settings in the game affect GPU utilization, so they don't end up making much difference to game performance.

We're working on it though: the next patch will see some improvements and the next couple of patches will make it better again. The game will probably remain CPU bound for the foreseeable future, but we do want to remove frame drops and improve the frame rate on high-end gaming PCs.

about 4 years ago - /u/RiotArkem - Direct link

Originally posted by MattackChopper

The game will probably remain CPU bound for the foreseeable future

Can you elaborate on why this game is so CPU reliant? I have a Ryzen 5 2600 and I have been getting a pretty consistent fps of around 144 to match my monitor's refresh rate, but as others have pointed out, graphics-heavy assets (Sage's slows and walls, a lot of smokes in one area, gunshots from multiple players in one area) are what cause the framerate to drop to sometimes lower than 60fps.

Is the issue due to textures not rendering fast enough and causing the drops if your CPU can't keep up? Or is it something more involving the "fog of war" trying to decide what to render? I'm not yelling at you to fix it, God knows you have enough of that, but the more info we have on our side, the easier it is for people to optimize to the best of our ability. Any insight into how the game works would help us greatly in figuring out how to make it run smoother.

I love this game and I think most of the people who are screaming that the sky is falling love it too, even if they don't express it well or politely, but we all just wanna play this amazing game you've created to its fullest potential.

The reason the game frame rate isn't heavily reliant on GPU performance is that we took some extreme measures during the development of the game to keep all the art performant (art style, polygon/texture/bone budgets, etc).

On the CPU side there are many contributing factors, there's no one thing that's the problem. That said, the biggest outlier right now is probably that our UI isn't very optimized (partly because it was one of the last things to get built). This is one area that we're spending a lot of time on at the moment looking for improvements.

Some of the other reasons include Unreal Engine being relatively single-threaded unless you put a lot of work into it (so it only uses a couple of cores fully by default). Another factor that's more specific to this game is the high tick rate of the servers, which leads to a lot of processing of incoming and outgoing packets. Likewise, there are several simplifying assumptions that some games can make that we don't, due to gameplay fidelity requirements (like using complex collision rather than simple hitboxes).
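
To make that last point concrete, here's a rough sketch (the shapes, counts and helper names are all made up for illustration, not VALORANT's actual collision code) of why testing a shot against a full set of per-bone volumes costs a lot more CPU per shot than testing it against one simple hitbox:

    #include <cstdio>
    #include <vector>

    // Treat every hit volume as a sphere to keep the toy small.
    struct Vec3 { float x, y, z; };
    struct Sphere { Vec3 center; float radius; };

    float Dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
    Vec3 Sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }

    // Standard ray-vs-sphere test; `dir` is assumed to be normalized.
    bool RayHitsSphere(Vec3 origin, Vec3 dir, const Sphere& s) {
        Vec3 oc = Sub(s.center, origin);
        float t = Dot(oc, dir);            // closest approach along the ray
        float d2 = Dot(oc, oc) - t * t;    // squared distance from the ray
        return t >= 0.0f && d2 <= s.radius * s.radius;
    }

    int main() {
        Vec3 muzzle{0, 0, 0}, aim{1, 0, 0};

        // Simple model: one big volume per character -> one test per shot.
        Sphere simpleHitbox{{10, 0, 0}, 1.0f};

        // Higher-fidelity model: dozens of small per-bone volumes per character,
        // so the per-shot CPU work scales with the bone count (before culling).
        std::vector<Sphere> perBoneVolumes(40, Sphere{{10, 0, 0}, 0.15f});

        int simpleTests = 1;
        bool hitSimple = RayHitsSphere(muzzle, aim, simpleHitbox);

        int complexTests = 0;
        bool hitComplex = false;
        for (const Sphere& bone : perBoneVolumes) {
            ++complexTests;
            hitComplex = RayHitsSphere(muzzle, aim, bone) || hitComplex;
        }

        std::printf("simple: %d test (hit=%d), complex: %d tests (hit=%d)\n",
                    simpleTests, hitSimple, complexTests, hitComplex);
    }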

But the TL;DR is that CPU performance isn't as good as we want it yet and the underlying reason for that is that we haven't put enough work into optimizing it. We're going to put more work in and so things should start getting better.

about 4 years ago - /u/RiotArkem - Direct link

Originally posted by InLoveWithInternet

Sorry I’m late in the discussion.

the limiting factor on your frame rate will be single threaded CPU performance

Why? This is a serious question. This sounds a bit crazy to me in 2020. We have had multi-core/multi-threaded CPUs for a long (long) time. We definitely see Intel/AMD going into more and more cores. Why do game developers still make games that rely on single-thread performance and can't benefit from the crazy horsepower multi-core CPUs bring to the table?

The industry is getting there but it's hard because there's a lot of state in a video game and each object that needs to be updated is often dependent on a lot of other objects.

Modern engines are building task dependency graph systems that let subsystems update independently of each other on different threads, but for many engines these systems aren't first-class citizens and a lot of programming effort is required to get them working.
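
As a minimal sketch of that idea (the subsystem names and grouping here are invented for illustration; real engines build and schedule the graph far more dynamically), you can sort subsystems into dependency "levels" and run each level's tasks on worker threads, with a barrier between levels standing in for the dependency edges:

    #include <cstdio>
    #include <functional>
    #include <future>
    #include <string>
    #include <vector>

    struct Task {
        std::string name;
        std::function<void()> work;
    };

    // Everything in a level only depends on earlier levels, so tasks within a
    // level can safely run on different threads at the same time.
    void RunLevel(const std::vector<Task>& level) {
        std::vector<std::future<void>> running;
        for (const auto& task : level) {
            running.push_back(std::async(std::launch::async, [&task] {
                std::printf("updating %s\n", task.name.c_str());
                task.work();
            }));
        }
        for (auto& f : running) f.wait();  // barrier before dependents start
    }

    int main() {
        auto noop = [] {};
        // Level 0: independent subsystems.
        std::vector<Task> level0 = {{"input", noop}, {"network", noop}};
        // Level 1: depends on input + network.
        std::vector<Task> level1 = {{"movement", noop}, {"abilities", noop}};
        // Level 2: depends on movement (positions) being final.
        std::vector<Task> level2 = {{"animation", noop}, {"audio", noop}};

        RunLevel(level0);
        RunLevel(level1);
        RunLevel(level2);
    }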

To process a frame you generally have to read network updates from the server, read input from the player, use this information to update the state of each game object, prepare a list of things to be rendered and send that list to the graphics card.

You can add some threads here. For example you often have an input polling thread and a network packet handling thread (checking for input/messages etc). There's also often a rendering thread that is responsible for shuffling objects to the graphics card when they're ready to be rendered.
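
Here's a small sketch of that structure, with hypothetical message types and sleep timings standing in for real devices and sockets (this is not VALORANT's actual loop): dedicated input and network threads fill thread-safe queues, and the single game thread drains them each frame before simulating and submitting a render list:

    #include <atomic>
    #include <chrono>
    #include <cstdio>
    #include <mutex>
    #include <queue>
    #include <string>
    #include <thread>

    template <typename T>
    class MessageQueue {
    public:
        void Push(T value) {
            std::lock_guard<std::mutex> lock(mutex_);
            queue_.push(std::move(value));
        }
        bool Pop(T& out) {
            std::lock_guard<std::mutex> lock(mutex_);
            if (queue_.empty()) return false;
            out = std::move(queue_.front());
            queue_.pop();
            return true;
        }
    private:
        std::mutex mutex_;
        std::queue<T> queue_;
    };

    int main() {
        std::atomic<bool> running{true};
        MessageQueue<std::string> inputEvents;
        MessageQueue<std::string> serverUpdates;

        std::thread inputThread([&] {           // polls input devices
            while (running) {
                inputEvents.Push("key_press");
                std::this_thread::sleep_for(std::chrono::milliseconds(4));
            }
        });
        std::thread networkThread([&] {         // receives server packets
            while (running) {
                serverUpdates.Push("state_delta");
                std::this_thread::sleep_for(std::chrono::milliseconds(8));
            }
        });

        for (int frame = 0; frame < 5; ++frame) {  // the single game thread
            std::string msg;
            while (serverUpdates.Pop(msg)) { /* apply server state */ }
            while (inputEvents.Pop(msg))   { /* apply player input */ }
            // ... simulate every game object (the expensive, serial middle) ...
            std::printf("frame %d: submit render list\n", frame);
            std::this_thread::sleep_for(std::chrono::milliseconds(7));
        }

        running = false;
        inputThread.join();
        networkThread.join();
    }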

However everything in the middle between receiving input and rendering output takes a lot of time (especially in VALORANT). This is where all the game objects are simulated. In theory we could simulate each game object on a different thread (doing 8 or 16 or however many at a time). In practice it becomes very easy for the objects to depend on each other and for many objects to depend on common state.

As a completely made up example:

To render a character on screen you need to calculate their animation pose, which depends on their movement speed (run vs walk), which depends on their location, input and gameplay buff state. So all of those will need to be updated before animation... except the character's speed also depends on the location of every other object (collision), so we have to update all locations before any of the animation before we're ready for rendering.

In practice it is very easy for these dependencies and shared state to lead to a theoretically multithreaded system devolving into a single-threaded one.
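
A toy version of that chain (the fields, numbers and pass boundaries are invented) shows how the shared collision dependency forces a full location pass to finish before any animation work can start, which is exactly the kind of barrier that eats into parallelism:

    #include <cmath>
    #include <cstdio>
    #include <vector>

    struct Character {
        float position = 0.0f;
        float desiredMove = 1.0f;
        bool speedBuff = false;
        float speed = 0.0f;
        int animationPose = 0;  // 0 = idle, 1 = walk, 2 = run
    };

    // Pass 1: resolve movement. Every character's new position depends on every
    // other character's position (collision), so this pass has to finish for the
    // whole roster before anything downstream can safely run.
    void UpdateLocations(std::vector<Character>& characters) {
        for (size_t i = 0; i < characters.size(); ++i) {
            float next = characters[i].position + characters[i].desiredMove;
            for (size_t j = 0; j < characters.size(); ++j) {
                if (j != i && std::fabs(next - characters[j].position) < 0.5f) {
                    next = characters[i].position;  // blocked by someone else
                }
            }
            characters[i].position = next;
        }
    }

    // Pass 2: speed and animation pose, which only read state finalized in pass 1.
    void UpdateAnimation(std::vector<Character>& characters) {
        for (auto& c : characters) {
            c.speed = c.desiredMove * (c.speedBuff ? 1.5f : 1.0f);
            c.animationPose = (c.speed > 1.2f) ? 2 : (c.speed > 0.0f ? 1 : 0);
        }
    }

    int main() {
        std::vector<Character> roster(10);
        UpdateLocations(roster);   // barrier: must fully complete...
        UpdateAnimation(roster);   // ...before any animation work starts
        std::printf("first character pose: %d\n", roster[0].animationPose);
    }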

All hope isn't lost though! Game engines are finding better ways for programmers to split up work. Unreal Engine today is much better at this than Unreal Engine 5 years ago, and newer engines like the Overwatch engine have put a lot of effort into this problem (see this video: https://www.youtube.com/watch?&v=W3aieHjyNvw)

Right now there's a lag time between PC hardware trends, concurrency programming research, game engine design and then games taking advantage of the tech. Experimental features in an engine (like some of the cool stuff in the Unreal Engine 5 demo) might not become mainstream in AAA games for most of a decade. (Edit: I've been playing X-Com: Chimera Squad and that's probably an Unreal Engine 3 game released in 2020)

Games take a long time to make and because of that development teams are often risk averse, so even if new tech is available it won't be adopted until there are clear incentives to do so. Sometimes that's because you're starting from scratch so there aren't a lot of old systems to re-engineer (e.g. Overwatch), sometimes there's a specific gameplay reason to adopt new tech (maybe there's no way to do a crazy universe-scale physics sim without a great threaded physics solver) or just platform requirements (maybe your Xbox Series X game won't run at 60fps unless you're running across all of its cores). Once the tech is adopted though it usually sticks around through future projects because everyone's familiar with its pros and cons, so it's not a production risk anymore.

So I guess I can't tell you when VALORANT will be massively multithreaded but don't be shocked if other games in a few years time start using all those threads that AMD is cramming into their processors :)

This got way longer than I expected and I haven't proofread any of it so hopefully it makes sense and I'm not just very wrong.

So um... thanks for coming to my ted talk I guess?

about 4 years ago - /u/RiotArkem - Direct link

Originally posted by InLoveWithInternet

Thanks a lot for your detailed answer, and actually it would make a great ted talk yea :)

I understand your points, it's quite logical that if you have a lot of things inter-linked it's even harder to make the whole program multi-threaded. And I'm sure it becomes complex quickly.

But I also think that the game industry in general didn't realize how quickly the clock speed limit would be reached and that we would be forced to add more cores instead of more frequency. I may be completely wrong since I'm not in the game industry, but from an outside perspective it feels like the problem hasn't been worked on early enough.

Worse, I'm not sure it's actually a priority. I can understand people want to mitigate risks etc., but also maybe there are some people (management, business, maybe devs themselves) who don't realize where we are. And where we have been for more than a decade now.

You look at benchmarks, you see those massive powerhouses that are the new Threadrippers, and you hear the reviewer say something like « if you are a gamer this is not the cpu to get because games are mostly single threaded », then you look at the new Ryzen 3950X and it has 16 cores! Same on the Intel side: even if they perform relatively better in single-threaded applications they still pack a lot of cores. The issue is not in the CPUs, it can't be in the CPUs, the issue has to be in the way we use the resources they provide. And we don't just game or work, we do both, and it will be more and more the case in the future.

And I think games will be judged more and more on those optimisations because people are now more and more aware of those technical aspects (the fps they get, the tick rate, the latency, etc.). I was playing CS 1.6 at 100fps 10 years ago with a CPU that was already multi-core; we should not have 144Hz or even 250Hz today on the latest CPUs, we should have so many fps that the question would not even exist.

There is something frightening in the fact that Cloudflare can handle millions and millions of users on those CPUs and games can't load them correctly. Yes, I get that they have a lot of nodes and that they use network balancing, and also that users are not « linked » so they don't have the same issues games have, but still..

You're not wrong! Sometimes it isn't a priority: some projects just don't really care about CPU performance.

Everything is a trade-off so it's not too surprising that my hypothetical dating sim isn't trying out new tech to try and use all cores and get to 1000fps. Big-name cinematic console games are also happy running at 30fps (though these are normally GPU bound, so they're preferring higher resolutions over higher frame rates).

In fact one reason we don't see more progress in multi-core computation for games is that most modern games are limited by GPU power rather than CPU power. Spreading the game work over multiple CPU cores might not even help the frame rate if you're stuck waiting for the GPU to do its thing.

In most games changing your graphics settings will have a massive impact on your frame rate, which generally means that the GPU is the limiting factor. It's actually rare to have a game like VALORANT where the graphics settings don't do a lot for your frame rate because more players are CPU limited than GPU limited.
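
As a back-of-the-envelope illustration (all timings below are invented): when the CPU and GPU pipeline work on different frames, the frame time is roughly whichever of the two is slower, so settings that only reduce GPU time stop helping once the game thread is the bottleneck:

    #include <algorithm>
    #include <cstdio>

    // Frame time is roughly the slower of the two pipelined stages.
    double FrameMs(double cpuMs, double gpuMs) { return std::max(cpuMs, gpuMs); }

    int main() {
        // GPU-bound game: halving GPU work (12ms -> 6ms) nearly doubles the fps.
        std::printf("gpu-bound: high settings %.0f fps, low settings %.0f fps\n",
                    1000.0 / FrameMs(4.0, 12.0), 1000.0 / FrameMs(4.0, 6.0));

        // CPU-bound game (the situation described here): the same settings change
        // barely moves the frame rate because the game thread is the bottleneck.
        std::printf("cpu-bound: high settings %.0f fps, low settings %.0f fps\n",
                    1000.0 / FrameMs(6.0, 4.0), 1000.0 / FrameMs(6.0, 2.0));
    }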

When making VALORANT we had two performance goals in mind: firstly, we wanted as many computers as possible to be able to run the game, and secondly, we wanted players on good PCs to be able to run at >120fps.

For the first, the biggest factor was the choice of rendering technologies: we chose an old shader model to support older video cards, built a forward renderer (which is really fast but doesn't support as many cool lighting effects as modern deferred renderers) and chose asset limits to fit constrained-VRAM systems. This means that someone with an Intel HD 4000 can play, but it doesn't really help people with modern gaming graphics cards.
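
For a rough feel of that renderer trade-off (the formulas and constants below are a crude cost model of my own, not how either renderer is actually implemented): forward shading re-runs lighting for every light on every shaded fragment, including overdrawn ones, while deferred pays an up-front G-buffer pass so the lighting cost only applies to visible pixels. With the small number of dynamic lights a stylized game needs, forward comes out ahead:

    #include <cstdio>

    int main() {
        const long pixels = 1920L * 1080L;   // visible pixels on screen
        const long overdraw = 3;             // average fragments shaded per pixel

        const long lightCounts[] = {1, 2, 8, 64};
        for (long lights : lightCounts) {
            // Forward: every (overdrawn) fragment is shaded with every light.
            long forwardCost = pixels * overdraw * lights;
            // Deferred: write a fat G-buffer once, then light only visible pixels.
            long deferredCost = pixels * overdraw * 2 + pixels * lights;
            std::printf("%2ld lights: forward %ld, deferred %ld (arbitrary units)\n",
                        lights, forwardCost, deferredCost);
        }
    }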

For the second goal of high frame rate play we started optimizing the game update loop. It was relatively easy to hit our initial goals on modern PCs so we started focusing on older and older PCs. We spent a lot of time trying to increase the number of computers that could hit 120fps (or 60fps for even older machines) and it's only recently that we've turned our attention back to the very high end.

Maybe this was a mistake and we should have spent more time earlier on this, but our thinking was that the game-feel difference between 30fps and 60fps is bigger than the difference between 60fps and 120fps, which is bigger than the difference between 120fps and 240fps.

Another area we have started putting more effort into is frame drops during combat. This is super important because having your frame rate drop from 200 to 100 when it matters most (combat) is very disruptive to gameplay. One of the reasons this hadn't gotten more attention earlier is that it's harder to measure than average frame rates.

We have really good metrics for what the mean and median frame rates for players are (by map, hardware, region, etc) and good stats for hitch detection (e.g. when a frame takes 10x longer than normal to process for some reason), but our aggregate stats aren't great at recording general slowdowns, and it turns out that it's much easier to optimize what you can measure. We're working on it now and the next two patches should have improvements here, but I think this won't be fixed for the long term until we're happy with our metrics as well as our current performance.
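
Here's a sketch of that measurement gap with made-up frame times (the 10x rule below just mirrors the example in this post; the real telemetry is obviously more involved): a single hitch is easy to flag against the median, but a ten-frame combat slowdown never trips the rule and barely moves the mean:

    #include <algorithm>
    #include <cstdio>
    #include <vector>

    double Median(std::vector<double> v) {
        std::sort(v.begin(), v.end());
        return v[v.size() / 2];
    }

    int main() {
        // ~4ms frames (250fps), one 45ms hitch, and a 10-frame combat drop to 10ms.
        std::vector<double> frameMs(200, 4.0);
        frameMs[50] = 45.0;
        for (int i = 120; i < 130; ++i) frameMs[i] = 10.0;

        double median = Median(frameMs);
        int hitches = 0;
        for (double ms : frameMs)
            if (ms > 10.0 * median) ++hitches;   // the "10x longer than normal" rule

        double mean = 0.0;
        for (double ms : frameMs) mean += ms;
        mean /= frameMs.size();

        std::printf("median %.1fms, mean %.2fms, hitches flagged: %d\n",
                    median, mean, hitches);
        // The 10-frame drop from 250fps to 100fps never trips the hitch rule and
        // barely changes the mean -- which is why it needs its own metric.
    }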

I guess weekends are just monologue time for me, here's another post that got longer than I thought it would :)