Use all bus data
Katsute opened this issue · 10 comments
Feature
Anytime a bus is requested:
- Check the cache
- Pull data for all buses
- Parse
- Fulfill the request
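A minimal sketch of this flow, assuming a hypothetical `fetchAllBuses()` HTTP helper and a fixed TTL (names are illustrative, not the actual OneMTA API):

```java
// Illustrative sketch only; fetchAllBuses and parseBus are hypothetical.
final class BusRequestFlow {

    private String rawCache;  // raw response body; never store parsed objects
    private long   expires;   // epoch millis after which the cache is stale

    synchronized String getBus(final String vehicleRef){
        // 1. check the cache
        if(rawCache == null || System.currentTimeMillis() > expires){
            // 2. pull data for all buses in a single request
            rawCache = fetchAllBuses();
            expires  = System.currentTimeMillis() + 60_000; // e.g. 60 second TTL
        }
        // 3. parse and 4. fulfill the request for the single vehicle
        return parseBus(rawCache, vehicleRef);
    }

    private String fetchAllBuses(){ throw new UnsupportedOperationException("stub"); }
    private String parseBus(final String raw, final String vehicleRef){ throw new UnsupportedOperationException("stub"); }

}
```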
Reason
The current implementation sends a new request for every bus, which is not efficient at scale. Requesting all buses once and then parsing is more efficient when fulfilling multiple requests.
Using a binary search similar to the one implemented in #90 against the bus line is not possible.
Store raw data in the cache, DO NOT STORE OBJECTS; parsed objects will cause memory issues.
< 10,000 buses in the system
https://bt.mta.info/wiki/Developers/SIRIVehicleMonitoring
> Please note that the calls made without either a VehicleRef or LineRef produces quite a load on the system, so use them sparingly. Any developers found to be making repeated calls (e.g. at less than 30 second intervals) for all vehicles in the system may find their key revoked.
Only this subclass needs to be modified
Stop monitoring should derive values from the vehicle request if possible.
The JSON parser needs to be massively optimized to avoid memory issues; use JIT (just-in-time) parsing, where we load the immediate keys into memory, then parse each nested object only when it is requested.
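A rough sketch of what that could look like, assuming a hypothetical wrapper class; the scanner records each immediate key's raw substring, and a nested object is only tokenized on first access:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of JIT parsing: immediate keys are indexed eagerly, nested
// objects stay as raw substrings until first access.
final class LazyJsonObject {

    private final Map<String,String>         raw    = new HashMap<>(); // key -> unparsed "{...}"
    private final Map<String,LazyJsonObject> parsed = new HashMap<>();

    // called by the top level scanner for each immediate key
    void putRaw(final String key, final String rawObject){
        raw.put(key, rawObject);
    }

    // parse on demand: the substring is only tokenized when requested
    LazyJsonObject getObject(final String key){
        return parsed.computeIfAbsent(key, k -> parse(raw.get(k)));
    }

    private static LazyJsonObject parse(final String rawObject){
        throw new UnsupportedOperationException("stub: run the real tokenizer here");
    }

}
```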
This count needs to be optimized to run only once; not possible due to the split logic.
https://sites.google.com/site/gson/streaming
https://stackoverflow.com/a/11876086
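For reference, the streaming style from the linked Gson page looks roughly like this (assumes a Gson dependency, and the key layout is simplified; the real SIRI response nests `VehicleActivity` deeper):

```java
import com.google.gson.stream.JsonReader;
import java.io.IOException;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.nio.charset.StandardCharsets;

final class VehicleScan {

    // streaming read: descend only into the keys we need, skip the rest
    // without building an object tree in memory
    static void scan(final InputStream in) throws IOException {
        try(final JsonReader reader = new JsonReader(new InputStreamReader(in, StandardCharsets.UTF_8))){
            reader.beginObject();
            while(reader.hasNext()){
                if("VehicleActivity".equals(reader.nextName())){
                    reader.beginArray();
                    while(reader.hasNext())
                        reader.skipValue(); // parse each vehicle here instead
                    reader.endArray();
                }else{
                    reader.skipValue(); // unneeded keys are skipped cheaply
                }
            }
            reader.endObject();
        }
    }

}
```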
Regex may actually be less efficient since it rescans the string multiple times. Use a character loop over the stream instead.
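Something like this single-pass scanner, which tracks brace depth and quoted-string state instead of rescanning with regex:

```java
import java.util.ArrayList;
import java.util.List;

// One pass over the raw JSON: record the comma positions that separate
// immediate children by tracking depth and string state.
final class TopLevelScanner {

    static List<Integer> splitPoints(final String raw){
        final List<Integer> splits = new ArrayList<>();
        int depth = 0;
        boolean inString = false;
        for(int i = 0; i < raw.length(); i++){
            final char ch = raw.charAt(i);
            if(inString){
                if(ch == '\\') i++;                  // skip the escaped character
                else if(ch == '"') inString = false;
            }else if(ch == '"')              inString = true;
            else if(ch == '{' || ch == '[')  depth++;
            else if(ch == '}' || ch == ']')  depth--;
            else if(ch == ',' && depth == 1) splits.add(i); // immediate child boundary
        }
        return splits;
    }

}
```

Each substring between split points could then be handed to something like the `putRaw` sketch above, so the nested objects are never scanned twice.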
Return a method to parse for:
OneMTA/src/main/java/dev/katsute/onemta/Json.java
Lines 197 to 200 in f3dfe30
and
OneMTA/src/main/java/dev/katsute/onemta/Json.java
Lines 235 to 238 in f3dfe30
@mashiro-san create a branch `upgrade-json`
@Katsute I have created a new branch `upgrade-json@f3dfe30`
- `get` methods need to be optimized so `synchronized` doesn't block reader threads
- pull bus stop alert info from the GTFS feed
- derive stop vehicles from all vehicles (see the sketch below)
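Deriving a stop's vehicles could then be a filter over the cached all-vehicles list rather than a separate StopMonitoring request (the `Vehicle` shape here is hypothetical):

```java
import java.util.List;
import java.util.stream.Collectors;

interface Vehicle { String getStopID(); } // hypothetical accessor

final class StopDerivation {

    // derive the vehicles serving one stop from the all-vehicles cache
    static List<Vehicle> vehiclesAtStop(final List<Vehicle> allVehicles, final String stopID){
        return allVehicles.stream()
            .filter(v -> stopID.equals(v.getStopID()))
            .collect(Collectors.toList());
    }

}
```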
Use `CopyOnWriteArrayList` to allow concurrent reads and locked writes.
To make writes thread safe, have the synchronized write block also check whether the resource is expired before fetching: any currently active write will leave the result not expired, so any queued writes will see that the resource is no longer expired and return the new value instead.
Get first then check if expired to prevent concurrency issues.
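A sketch of that pattern: lock-free reads via `volatile`, and a synchronized refresh that re-checks staleness so queued writers return the fresh value instead of fetching again (`fetch()` is a hypothetical network call; the entries themselves could live in a `CopyOnWriteArrayList` for concurrent iteration):

```java
// Reads are lock-free via volatile; writes synchronize and re-check
// expiry so only the first queued writer actually fetches.
final class CachedResource {

    private volatile String raw;      // raw payload, never parsed objects
    private volatile long   expires;  // epoch millis

    String get(){
        // get first, then check expiry, to avoid racing a concurrent refresh
        final String current = raw;
        if(current != null && System.currentTimeMillis() < expires)
            return current;
        return refresh();
    }

    private synchronized String refresh(){
        // double check: a write that just finished leaves the resource
        // non-expired, so queued writers return the new value immediately
        if(raw == null || System.currentTimeMillis() >= expires){
            raw     = fetch();
            expires = System.currentTimeMillis() + 60_000; // e.g. 60 second TTL
        }
        return raw;
    }

    private String fetch(){ throw new UnsupportedOperationException("stub"); }

}
```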
@mashiro-san create a new branch `upgrade-cache`
@Katsute I have created a new branch `upgrade-cache@ebf9c8e`