The industry is getting there, but it's hard: a video game holds a lot of state, and each object that needs updating often depends on a lot of other objects.
Modern engines are building task dependency graph systems that let subsystems update independently from each other on different threads, but in many engines these systems aren't first-class citizens and a lot of programming effort is required to get them working.
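To give a flavour of what a task graph buys you (completely made-up subsystems here, with std::async standing in for a real engine's worker-pool scheduler), the dependency edges for one frame might look roughly like this:

```cpp
#include <cstdio>
#include <future>

// Hypothetical frame: physics and AI have no edge between them, so they
// can run on different threads at the same time. Animation has an edge
// from physics, and rendering waits on everything upstream.
int main() {
    auto physics = std::async(std::launch::async, [] { std::puts("physics"); });
    auto ai      = std::async(std::launch::async, [] { std::puts("ai"); });

    physics.wait();  // animation's only prerequisite is physics
    auto animation = std::async(std::launch::async, [] { std::puts("animation"); });

    ai.wait();
    animation.wait();
    std::puts("render");  // safe: every upstream task has finished
}
```

The hard part in a real engine isn't the scheduler, it's declaring those edges correctly for hundreds of systems, which is why it takes so much effort when the graph isn't a first-class part of the engine.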
To process a frame you generally have to read network updates from the server, read input from the player, use that information to update the state of each game object, build a list of things to render, and send that list to the graphics card.
You can add some threads here: for example, you often have an input polling thread and a network packet handling thread (checking for input/messages etc.). There's also often a rendering thread that's responsible for shuffling objects to the graphics card when they're ready to be rendered.
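In very rough C++ (all of these names are invented, this isn't any engine's actual API), the shape of that setup is: polling threads fill queues, the game thread drains them and simulates, and a render thread consumes whatever the game thread submits:

```cpp
#include <atomic>
#include <chrono>
#include <cstdio>
#include <mutex>
#include <queue>
#include <string>
#include <thread>
#include <vector>

// Minimal thread-safe queue; real engines use fancier lock-free ones.
template <typename T>
struct Queue {
    std::mutex m;
    std::queue<T> q;
    void push(T v) { std::lock_guard<std::mutex> lk(m); q.push(std::move(v)); }
    std::vector<T> drain() {
        std::lock_guard<std::mutex> lk(m);
        std::vector<T> out;
        while (!q.empty()) { out.push_back(std::move(q.front())); q.pop(); }
        return out;
    }
};

int main() {
    using namespace std::chrono_literals;
    Queue<std::string> input, network, render_queue;
    std::atomic<bool> running{true};

    // Polling threads only fill queues; they never touch game state.
    std::thread input_thread([&] {
        while (running) { input.push("key_press"); std::this_thread::sleep_for(2ms); }
    });
    std::thread net_thread([&] {
        while (running) { network.push("server_update"); std::this_thread::sleep_for(8ms); }
    });
    // Render thread shuffles whatever the game thread submitted to the GPU.
    std::thread render_thread([&] {
        while (running) {
            for (auto& cmd : render_queue.drain()) std::printf("draw %s\n", cmd.c_str());
            std::this_thread::sleep_for(16ms);
        }
    });

    // The game thread: everything between input and render happens here,
    // and this is the part that's hard to spread across cores.
    for (int frame = 0; frame < 5; ++frame) {
        auto in  = input.drain();
        auto net = network.drain();
        // ... update every game object using `in` and `net` ...
        render_queue.push("frame " + std::to_string(frame) +
                          " (" + std::to_string(in.size() + net.size()) + " events)");
        std::this_thread::sleep_for(16ms);
    }
    running = false;
    input_thread.join(); net_thread.join(); render_thread.join();
}
```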
However, everything in the middle, between receiving input and rendering output, takes a lot of time (especially in VALORANT). This is where all the game objects are simulated. In theory we could simulate each game object on a different thread (doing 8 or 16 or however many at a time). In practice it becomes very easy for objects to depend on each other and for many objects to depend on common state.
As a completely made-up example: to render a character on screen you need to calculate their animation pose, which depends on their movement speed (run vs walk), which in turn depends on their location, their input, and their gameplay buff state. So all of those need to be updated before animation... except that a character's speed also depends on the location of every other object (collision), so we have to update every object's location before we can update any object's animation, and all of that before we're ready to render.
In practice it's very easy for these dependencies and this shared state to make a theoretically multithreaded system devolve into a single-threaded one.
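One common way out is to split the frame into explicit phases with a sync point between them: one parallel pass writes every location, and only after all of them are final does a second parallel pass read them. A minimal sketch of that idea (types invented, and std::execution::par standing in for an engine's job system):

```cpp
#include <algorithm>
#include <cmath>
#include <execution>  // C++17 parallel algorithms
#include <vector>

// Invented stand-ins for the made-up example above.
struct Character {
    float x = 0.0f;      // location
    float speed = 0.0f;  // depends on collision against everyone else
    int   pose = 0;      // 0 = walk animation, 1 = run animation
};

int main() {
    std::vector<Character> chars(1000);

    // Phase 1: update all locations in parallel. Each thread writes
    // only its own character, so there's no contention.
    std::for_each(std::execution::par, chars.begin(), chars.end(),
                  [](Character& c) { c.x += 1.0f; });

    // Implicit barrier: std::for_each doesn't return until every
    // location is final. That's the "update all locations first" rule.

    // Phase 2: speed (reads *other* characters' locations for
    // collision), then animation pose. Locations are read-only in this
    // phase, so concurrent reads need no locks.
    std::for_each(std::execution::par, chars.begin(), chars.end(),
                  [&chars](Character& c) {
                      bool blocked = std::any_of(chars.begin(), chars.end(),
                          [&c](const Character& o) {
                              return &o != &c && std::fabs(o.x - c.x) < 0.5f;
                          });
                      c.speed = blocked ? 1.0f : 5.0f;  // walk vs run
                      c.pose  = c.speed > 3.0f ? 1 : 0;
                  });
    // Now every pose is ready and the render list can be built.
}
```

Each phase either writes a piece of state or reads it, never both at once, so no locks are needed; the cost is that the barriers are exactly where cores go idle if one phase has a long serial tail, which is how these systems quietly slide back toward single-threaded performance.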
All hope isn't lost though! Game engines are finding better ways for programmers to split up work. Unreal Engine today is much better at this than Unreal Engine 5 years ago, and newer engines like the Overwatch engine have put a lot of effort into this problem (see this video: https://www.youtube.com/watch?&v=W3aieHjyNvw).
Right now there's a lag between PC hardware trends, concurrency programming research, game engine design, and games actually taking advantage of the tech. Experimental features in an engine (like some of the cool stuff in the Unreal Engine 5 demo) might not become mainstream in AAA games for most of a decade. (Edit: I've been playing X-Com: Chimera Squad and that's probably an Unreal Engine 3 game released in 2020.)
Games take a long time to make, and because of that development teams are often risk averse, so even if new tech is available it won't be adopted until there are clear incentives to do so. Sometimes that's because you're starting from scratch, so there aren't a lot of old systems to re-engineer (e.g. Overwatch); sometimes there's a specific gameplay reason to adopt new tech (maybe there's no way to do a crazy universe-scale physics sim without a great threaded physics solver); or it's just platform requirements (maybe your Xbox Series X game won't run at 60fps unless you're running across all of its cores). Once the tech is adopted, though, it usually sticks around through future projects, because everyone's familiar with its pros and cons, so it's not a production risk anymore.
So I guess I can't tell you when VALORANT will be massively multithreaded, but don't be shocked if other games in a few years' time start using all those threads that AMD is cramming into its processors :)
This got way longer than I expected, and I haven't proofread any of it, so hopefully it makes sense and I'm not just very wrong.
So um... thanks for coming to my ted talk I guess?