* `logger.INFO` will, at best, be a constant; it should be `logger.info()`.
* When we're not interested in the `new_data` second element of the tuple
  returned by `killswitches.check_killswitch()` we can't use `_`, as there's
  a potential clash with the `l10n.py` injection of `_()` as a builtin.
  And you can't annotate types at first use when unpacking a returned tuple,
  so declare them on their own lines, with throwaway default values,
  instead; see the sketch below.
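  A minimal sketch of the resulting pattern; the killswitch name follows the
  `capi.request.<endpoint>` naming below, and the function name is
  illustrative:

  ```python
  import killswitches  # project-internal module, as named in these notes

  def capi_market_query_allowed() -> bool:
      """Return False when the 'capi.request.market' killswitch is active."""
      # Can't write `blocked, _ = ...`: l10n.py may have injected `_()` as a
      # builtin translator, which `_` would then shadow.  And types can't be
      # annotated inside the unpacking itself, so declare them on their own
      # lines with throwaway defaults first.
      blocked: bool = False
      new_data: dict = {}
      blocked, new_data = killswitches.check_killswitch('capi.request.market', {})
      return not blocked
  ```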
* Added `capi.request.<endpoint>` killswitches at appropriate call points.
* Added `eddn.capi_export.<type>` killswitches. This allows for killing
  just the EDDN export of such CAPI-derived data, without stopping the actual
  queries, as other plugins/functionality might still have harmless uses for
  the data. A sketch of the gating follows.
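  How such a gate might look at an EDDN export call point; the `<type>`
  value and the function name are illustrative:

  ```python
  import killswitches  # project-internal module, as named in these notes

  def export_commodities_to_eddn(data: dict) -> None:
      """Send CAPI-derived commodity data to EDDN, unless killswitched."""
      blocked: bool = False
      new_data: dict = {}
      blocked, new_data = killswitches.check_killswitch(
          'eddn.capi_export.commodities', data
      )
      if blocked:
          # Only the EDDN export is killed; the CAPI query already ran,
          # so other plugins/functionality can still use the data.
          return
      # ... construct and queue the EDDN message from new_data ...
  ```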
* PLUGINS.md: Actually describe the contents of the `data` passed to plugins,
  and point out that it might not always contain market or shipyard data.
  This is not only because of the new killswitches; it could already happen
  if the station/port docked at didn't offer those services. A defensive
  sketch for plugin authors follows.
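  In plugin terms that means treating those sections as optional. A hedged
  sketch, assuming the usual CAPI `lastStarport` layout:

  ```python
  def cmdr_data(data, is_beta: bool) -> None:
      """Plugin hook: `data` may lack market and/or shipyard sections."""
      last_starport = data.get('lastStarport') or {}
      commodities = last_starport.get('commodities')
      if commodities is None:
          # No market data: the station has no market service, or the
          # relevant killswitch blocked the query/export.
          return
      # ... safe to use commodities here ...
  ```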
* Some misc typing cleanups.
* eddn: Don't schedule `queue_check_and_send()` if EDMC_NO_UI.
* `export_(commodities|outfitting|shipyard)` lost the `is_odyssey` argument
  in 556ace5306bebbcf34c1a56a9023a822218a73f1.
* EDDNSender: Helper `set_ui_status()` in which the check for EDMC_NO_UI
  is performed. Used in `send_message()`. In the EDMC_NO_UI case it will
  log the text at INFO level instead. A sketch follows.
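  A minimal method sketch; the status-widget attribute name is an
  assumption:

  ```python
  import logging
  import os

  logger = logging.getLogger(__name__)

  def set_ui_status(self, text: str) -> None:
      """Set the UI status line; under EDMC_NO_UI, INFO log it instead."""
      if os.getenv('EDMC_NO_UI'):
          logger.info(text)
          return
      self.status_widget['text'] = text  # hypothetical tk widget attribute
  ```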
* This did, however, remind me that the `data` passed into `cmdr_data()`
  is an amalgam of the `/profile`, `/market` and `/shipyard` queries.
  This means that `data.source_endpoint` is **not correct for all of
  the data and its uses**. As such I had to pass 'hard coded' values into
  the function from the various CAPI export functions. They know what it
  is they're exporting, as sketched below.
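  Illustrative only; the parameter name follows `data.source_endpoint`, and
  the sender helper is hypothetical:

  ```python
  # Each CAPI export function knows which endpoint its slice of the amalgam
  # came from, so it passes that explicitly rather than trusting
  # data.source_endpoint.
  def export_commodities(data, is_beta: bool) -> None:
      send_to_eddn(data, source_endpoint='market')    # hypothetical sender

  def export_shipyard(data, is_beta: bool) -> None:
      send_to_eddn(data, source_endpoint='shipyard')
  ```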
* As this reminded me that "CAPI `data` is actually a `CAPIDATA`", I've
documented that in PLUGINS.md, but with a dire warning against relying on
any of the extra properties.
* In theory we would always see `Fileheader` and clear `pending[]`, but let's
  be extra paranoid and also clear it if there's a gameversion/build
  difference between the prior event and the current one; see the sketch
  after this list.
1. Due to the _TIMEOUT on the actual `post()` of a message it would be
   possible for new entries to get queued in the meantime. These queued
   entries could be 'in session' and end up going through `pending[]`, and
   thus be sent, before one of the 'new session' events is detected so as to
   clear pending. `this.gameversion/build` could have changed in the
   meantime, so they would no longer be correct if the game client had
   changed.
2. So, pass in the current gameversion/build when a message is pushed into
the queue, and parse those back out when they're pulled out of the queue.
3. Use those versions in the message, not `this.` versions.
* Record the 'state' version of these in `this`.
* Use those when constructing the message.
* NB: Need to check if messages can be retained in the queue across client
changes. Coming up ....
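  A sketch covering both the paranoid `pending[]` clear and carrying
  gameversion/build with each queued row; the table columns and helper names
  are assumptions, and `this` is the plugin's module-level state object:

  ```python
  import json
  import sqlite3
  import time

  def on_journal_entry(entry: dict) -> None:
      # New session, or any gameversion/build difference between the prior
      # event (recorded in `this`) and the current one: clear pending[].
      if (entry['event'] == 'Fileheader'
              or entry.get('gameversion', '') != this.gameversion
              or entry.get('build', '') != this.gamebuild):
          this.pending = []

  def queue_message(db: sqlite3.Connection, cmdr: str, msg: dict) -> None:
      # Store the versions current *at queue time* alongside the message.
      db.execute(
          'INSERT INTO messages (created, cmdr, gameversion, gamebuild, message) '
          'VALUES (?, ?, ?, ?, ?)',
          (time.time(), cmdr, this.gameversion, this.gamebuild, json.dumps(msg)),
      )
      db.commit()

  def load_message(db: sqlite3.Connection, msg_id: int) -> dict:
      # On send, use the versions stored with the row, not the current
      # this.gameversion/this.gamebuild.
      gameversion, gamebuild, message_json = db.execute(
          'SELECT gameversion, gamebuild, message FROM messages WHERE id = ?',
          (msg_id,),
      ).fetchone()
      message = json.loads(message_json)
      message['header']['gameversion'] = gameversion
      message['header']['gamebuild'] = gamebuild
      return message
  ```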
* Adds `monitor.is_live_galaxy()` for general use.
* Assumes Update 14 starts after 2022-11-29T09:00:00+00:00. That's the
  currently scheduled day, and recently the servers have been down by that
  time. The likelihood of them coming back *up* quickly seems slim to none.
* If we couldn't parse the `gameversion` from the Journal using
  `semantic_version.Version.coerce()` then this will fail, and we assume
  we're in the Legacy galaxy; see the sketch below.
Whilst setting it to the same "CAPI-<endpoint>" string as `gameversion` in
these cases would probably be OK, that's not the intent of the EDDN
documentation, which has now been clarified.
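  A hedged sketch of the check; it assumes Live means a 4.0-or-later
  `gameversion` after the Update 14 split, and the attribute names are
  illustrative:

  ```python
  import semantic_version

  def is_live_galaxy(self) -> bool:
      """True if the current game client is on the Live galaxy."""
      try:
          version = semantic_version.Version.coerce(
              self.state.get('GameVersion') or ''
          )
      except ValueError:
          # Couldn't parse gameversion: assume the Legacy galaxy.
          return False
      # Update 14 (2022-11-29T09:00:00+00:00) split the galaxies; Live
      # clients report 4.x gameversions thereafter.
      return version >= semantic_version.Version('4.0.0')
  ```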
* In case of apparent issues, have a `--trace-on` to better see what's (not)
  happening.
  All the old DEBUG logging, even if commented out, is now under this.
* Also added some INFO level logging for the legacy replay.jsonl conversion,
as it should be one-time per user.
* Some additional DEBUG logging for closing down.
* An aborted attempt was made to use a thread worker, but:
  1. sqlite3 doesn't allow cross-thread use of the same sqlite3 connection.
  2. Having an on-going query on one cursor, e.g. gathering all the
     outstanding message `id`s, whilst trying to DELETE a row hits a
     "database is locked" error.
* So, back to tk `after()`. `send_message_by_id()` has been audited to ensure
  its boolean return is accurate, so there shouldn't be any way to get hung
  up on a single message *other than if the EDDN Gateway is having issues,
  and thus it should be retried anyway*. Any reason for a 'bad message' will
  cause a `True` return, and thus deletion of the message, in *this* call to
  `queue_check_and_send()`.
* There is a new `reschedule` parameter to `queue_check_and_send()`. If
  `True` then at the end it will re-schedule itself.
  There is a check in `journal_entry()` for the `Docked` event, and if this
  occurs it will schedule `queue_check_and_send()` with `reschedule` set to
  `False` so that we don't end up with multiple parallel schedulings.
  It's still possible for a docking to have coincided with a scheduled run
  and thus cause double-rate sending to EDDN, but we can live with that.
* The same scheduling mechanism is used, with a much smaller delay, to
  process more than one queued message per run.
  Hence the `have_rescheduled` bool *in* the function, indicating whether a
  'fast' reschedule has already been set. This prevents the slow one *also*
  being set in that scenario. The slow one will be scheduled once a fast run
  finds no more rows to process. A sketch of the combined logic follows.
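  A minimal sketch of the combined shape; the delays, the
  `oldest_queued_message_id()` helper and the tk parent attribute are
  assumptions:

  ```python
  FAST_DELAY_MS = 250      # illustrative values
  SLOW_DELAY_MS = 60_000

  def queue_check_and_send(self, reschedule: bool) -> None:
      """Try to send one queued message, then re-schedule as appropriate."""
      have_rescheduled = False
      msg_id = self.oldest_queued_message_id()  # hypothetical helper
      if msg_id is not None and self.send_message_by_id(msg_id):
          # Sent, or deleted as a 'bad message': more rows may remain, so
          # schedule a fast re-run instead of waiting the slow cadence.
          self.parent.after(FAST_DELAY_MS, self.queue_check_and_send, reschedule)
          have_rescheduled = True

      if reschedule and not have_rescheduled:
          # Normal cadence, only when no fast re-run is already pending.
          self.parent.after(SLOW_DELAY_MS, self.queue_check_and_send, reschedule)
  ```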
* Erroneously used 'CAPI-commoodity' when it's 'CAPI-market' (name of the
CAPI endpoint, not anything to do with EDDN schema names, and '-commodity'
would also be wrong for that).
* Set `header` for (CAPI) `export_commodities()`; a sketch follows.
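  What that might look like; `standard_header()` is a hypothetical helper,
  and the empty `gamebuild` follows the clarified EDDN documentation
  mentioned above:

  ```python
  def export_commodities(self, data, is_beta: bool) -> None:
      msg = {
          '$schemaRef': 'https://eddn.edcd.io/schemas/commodity/3'
                        + ('/test' if is_beta else ''),
          # 'CAPI-market': named for the CAPI endpoint queried, per the fix
          # above; gamebuild stays empty rather than reusing that string.
          'header': self.standard_header(
              game_version='CAPI-market', game_build=''
          ),
          # 'message': built from `data` here ...
      }
  ```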