Caching System
The caching mechanism is designed to request only the data that is not yet in the cache, down to the column level. Received data updates the cache, and the invalidation system removes all dependent data from every related entity whenever data changes. This is possible because the system knows all data relationships from the profiles.
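The column-level behavior can be illustrated with a minimal sketch (all names here, such as getRows and fetchColumns, are illustrative, not the system's actual API): given the columns a request needs, only the columns missing from the cache are fetched, then merged in.

```javascript
// Illustrative sketch: request only the columns missing from the cache.
// Names (cache, getRows, fetchColumns) are hypothetical, not the real API.
const cache = new Map(); // rowId -> { column: value }

async function getRows(rowIds, columns, fetchColumns) {
  const toFetch = new Map(); // rowId -> columns not yet cached
  for (const id of rowIds) {
    const cached = cache.get(id) || {};
    const missing = columns.filter((c) => !(c in cached));
    if (missing.length) toFetch.set(id, missing);
  }
  if (toFetch.size) {
    // One request covering only what is absent from the cache.
    const fetched = await fetchColumns(toFetch);
    for (const [id, values] of fetched) {
      cache.set(id, { ...(cache.get(id) || {}), ...values });
    }
  }
  // Assemble the result entirely from the now-complete cache.
  return rowIds.map((id) => {
    const row = cache.get(id);
    return Object.fromEntries(columns.map((c) => [c, row[c]]));
  });
}
```

On a repeated request, already-cached columns are never fetched again; only the newly requested columns go to the data source.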
The mechanism works autonomously and does not require developer intervention; however, there are options that may be useful.
You can completely disable the mechanism by specifying "useCache": false in the config.
You can disable caching for a specific request by passing "use_cache": false in the request parameters (note the inconsistency: the config option uses camelCase, while the request parameter uses snake_case).
This parameter relates to the request itself, not to the method parameters: the system reads it from r.rParams rather than from r.data.params. However, you can pass it in the method parameters, and it will be moved to the correct location automatically.
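The two switches and the automatic relocation can be sketched as follows (the request shape and the helper names are assumptions based on the r.rParams and r.data.params fields mentioned above, not the system's actual code):

```javascript
// Global switch in the config (camelCase key):
const config = { useCache: false };

// Per-request switch (snake_case key), read from r.rParams, not r.data.params.
// If the caller put use_cache into the method parameters instead,
// it is moved to the request parameters automatically.
function normalizeUseCache(r) {
  if (r.data && r.data.params && 'use_cache' in r.data.params) {
    r.rParams = r.rParams || {};
    r.rParams.use_cache = r.data.params.use_cache;
    delete r.data.params.use_cache;
  }
  return r;
}

function cacheEnabled(r) {
  if (config.useCache === false) return false;  // disabled globally
  return r.rParams?.use_cache !== false;        // disabled for this request
}
```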
The cache is currently stored in a prepared object in the main process memory. However, it is designed to be moved to a separate service, such as Redis, to allow multiple instances to run with a single cache.
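One way to prepare for that move is to hide the storage behind a minimal async store interface, so the in-process object can later be replaced by a Redis-backed client without touching the callers. A hypothetical sketch (class and method names are illustrative):

```javascript
// Hypothetical store interface: the in-memory variant used today.
// A Redis-backed variant would implement the same async methods
// (get/set/del on a redis client), letting several instances share
// one cache. No real client is wired up in this sketch.
class MemoryCacheStore {
  constructor() {
    this.data = new Map();
  }
  async get(key) {
    return this.data.get(key);
  }
  async set(key, value) {
    this.data.set(key, value);
  }
  async del(keys) {
    for (const k of keys) this.data.delete(k);
  }
}
```

Keeping every store method async from the start means switching to a networked backend later does not change any call sites.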
Not yet fully implemented
At the class level, after the profile is loaded, you can set the cache_max_length field (for example, in an overridden init method; in the profile it is this.class_profile.cache_max_length). It limits the number of rows that can be stored in the cache for this class. The parameter exists, but the mechanism that evicts records beyond the limit (the oldest ones in the cache) is not yet implemented. In further development it would also make sense to move cache_max_length into the profile stored in the DBMS, i.e., to extend the class_profile structure.
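Since the eviction mechanism is not implemented yet, the following is only a sketch of how it could work, relying on the fact that a Map iterates keys in insertion order, so the earliest-inserted (oldest) rows are removed first (all names are illustrative):

```javascript
// Illustrative eviction sketch: keep at most cacheMaxLength rows for a
// class, dropping the oldest entries. Map preserves insertion order, so
// iterating keys() yields the oldest rows first.
function evictOverLimit(classCache, cacheMaxLength) {
  const excess = classCache.size - cacheMaxLength;
  const evicted = [];
  if (excess > 0) {
    for (const key of classCache.keys()) {
      if (evicted.length === excess) break;
      evicted.push(key);
    }
    for (const key of evicted) classCache.delete(key);
  }
  return evicted; // ids of rows removed from the cache
}
```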
Currently, the system does not guard against memory overflow. Control for the maximum memory limit specified in the configuration will need to be added, with older data cleared as needed.
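Such a guard could follow the same oldest-first pattern: compare an estimated cache size against the configured limit and evict entries until it fits. A sketch under the assumption that entry sizes can be estimated by a caller-supplied function (all names are illustrative):

```javascript
// Sketch of a memory guard: evict oldest entries until the estimated
// total size fits under the configured limit. sizeOf is a caller-supplied
// estimator (e.g. length of the serialized value); names are illustrative.
function enforceMemoryLimit(cache, maxBytes, sizeOf) {
  let total = 0;
  for (const value of cache.values()) total += sizeOf(value);
  for (const key of cache.keys()) {
    if (total <= maxBytes) break;
    total -= sizeOf(cache.get(key));
    cache.delete(key); // oldest first: Map preserves insertion order
  }
  return total; // estimated bytes remaining in the cache
}
```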