| Feature | Description |
| --- | --- |
| Target recognition | Recognizes targets from a visual input (e.g., a screen capture) |
| Trigger | Executes operations when a target is recognized |
| Aim-droid | Moves the mouse cursor onto the target with human-like movements |
| Blazing-fast performance | Target recognition takes just 7 to 19 ms (roughly 100 recognitions per second) |
Shoots when the crosshair is within a recognized target's area.
Some of the parameters/criteria that can be set: firing rounds with a delay between them, firing rapid bursts, and checking the current accuracy status so that shots are fired only when accuracy is at its peak.
A visual representation of the target that entered the crosshair and triggered the shooting response. It is not pixel-perfect, since pixel-perfect matching would require far more computation and lower the recognitions-per-second rate. In this example the average is about 100 recognitions per second, which yields a very fast shooting response whenever the crosshair enters the area of a recognized target.
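The README does not document the matching algorithm itself, so as a hedged sketch only: a common way to score how well a target image fits a region of the captured frame is the mean absolute difference between pixels (lower is a better match). The struct and function names below are placeholders, not the project's API.

```rust
/// Grayscale image stored row-major (a stand-in for real screen data).
struct Gray {
    w: usize,
    h: usize,
    px: Vec<u8>,
}

impl Gray {
    fn at(&self, x: usize, y: usize) -> u8 {
        self.px[y * self.w + x]
    }
}

/// Mean absolute difference between `target` and the patch of `frame`
/// whose top-left corner is at (ox, oy). 0.0 means an exact match.
fn patch_score(frame: &Gray, target: &Gray, ox: usize, oy: usize) -> f64 {
    let mut sum: u64 = 0;
    for y in 0..target.h {
        for x in 0..target.w {
            let a = frame.at(ox + x, oy + y) as i64;
            let b = target.at(x, y) as i64;
            sum += (a - b).unsigned_abs();
        }
    }
    sum as f64 / (target.w * target.h) as f64
}

fn main() {
    // A 4x4 frame containing a bright 2x2 square at (1, 1).
    let frame = Gray { w: 4, h: 4, px: vec![
        0, 0,   0,   0,
        0, 200, 200, 0,
        0, 200, 200, 0,
        0, 0,   0,   0,
    ]};
    let target = Gray { w: 2, h: 2, px: vec![200, 200, 200, 200] };
    println!("{}", patch_score(&frame, &target, 1, 1)); // exact match -> 0
    println!("{}", patch_score(&frame, &target, 0, 0)); // poor match -> 150
}
```

Skipping pixels or comparing only a coarse grid of sample points is what trades pixel-perfect accuracy for the higher recognitions-per-second rate described above.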
Recognizes the enemy's guard position and copies it with blazing-fast speed. The whole process of acquiring the screen data, recognizing the enemy's guard position, and sending the input needed to assume the same position takes only 4 to 7 milliseconds. The dynamic values on the screen are printed using pixel_caster.
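A figure like 4 to 7 ms can be measured by timing the whole capture-recognize-respond pipeline with `std::time::Instant`. The stage functions below are hypothetical placeholders (the project's real function names are not shown in this README); only the timing pattern is the point.

```rust
use std::time::Instant;

// Hypothetical pipeline stages, stubbed out for illustration.
fn capture_screen() -> Vec<u8> { vec![0; 1920 * 1080 * 4] } // BGRA frame
fn recognize_guard(_frame: &[u8]) -> Option<&'static str> { Some("high guard") }
fn send_input(_stance: &str) {}

fn main() {
    let t0 = Instant::now();
    let frame = capture_screen();
    if let Some(stance) = recognize_guard(&frame) {
        send_input(stance);
    }
    // Timing wraps the whole pipeline, so the printed value is the same
    // end-to-end latency the README reports as 4-7 ms for the real project.
    println!("pipeline took {:?}", t0.elapsed());
}
```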
Recognizes all of the enemy's offensive and defensive stances: guards, attacks, feints, charges, guardbreaks, etc. A high chance of an offensive response is set to follow a successful parry, dodge, or guardbreak recognition.
Imports a .png file containing the target to be recognized. The image is scanned, and the resulting data is used to create a range of targets, both upscaled and downscaled, within a provided range of sizes.
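The README does not say which scaling algorithm is used, so as a minimal sketch under that assumption: nearest-neighbour rescaling of a decoded grayscale bitmap is the simplest way to derive a range of size variants from one sample target.

```rust
/// Nearest-neighbour rescale of a row-major grayscale bitmap.
/// `src` is `sw` x `sh`; the result is `dw` x `dh`.
fn rescale(src: &[u8], sw: usize, sh: usize, dw: usize, dh: usize) -> Vec<u8> {
    let mut dst = vec![0u8; dw * dh];
    for y in 0..dh {
        for x in 0..dw {
            // Map each destination pixel back onto the nearest source pixel.
            let sx = x * sw / dw;
            let sy = y * sh / dh;
            dst[y * dw + x] = src[sy * sw + sx];
        }
    }
    dst
}

fn main() {
    // A 2x2 sample target, expanded into a range of square size variants.
    let sample = [10u8, 20, 30, 40];
    for size in 1..=4 {
        let variant = rescale(&sample, 2, 2, size, size);
        println!("{}x{} -> {:?}", size, size, variant);
    }
}
```

Each variant would then be matched against the screen independently, so the target is still recognized when it appears larger or smaller than in the sample image.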
_ing Vision now has a graphical user interface (GUI) for smart target creation and recognition testing. The GUI shown is built with the egui library for Rust. A version built with iced also exists, but it currently lacks some image-rendering features already present in egui.
_ing Vision is now able to visually recognize text displayed on the screen. The set of characters to recognize can be provided either as a sample .png image or as a .ttf font file. The latter is preferred: thanks to fonts' rescaling capabilities, it increases recognition precision for characters whose sizes differ widely from those in the sample image.