* An aborted attempt was made to use a thread worker, but:
1. sqlite3 doesn't allow cross-thread use of the same sqlite3 connection.
2. Having an on-going query on one cursor, e.g. gathering all the
   outstanding message `id`s, whilst trying to DELETE a row, hits a
   "database is locked" error.
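The first of those restrictions can be demonstrated in isolation; `sqlite3.connect()` defaults to `check_same_thread=True`, so using the connection from another thread raises immediately:

```python
import sqlite3
import threading

conn = sqlite3.connect(":memory:")  # check_same_thread=True by default

results: list[Exception] = []

def worker() -> None:
    try:
        conn.execute("SELECT 1")  # same connection, different thread
    except sqlite3.ProgrammingError as e:
        results.append(e)

t = threading.Thread(target=worker)
t.start()
t.join()
# results[0] is the "SQLite objects created in a thread can only be used
# in that same thread" ProgrammingError
```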
* So, back to tk `after()`. `send_message_by_id()` has been audited to ensure
  its boolean return is accurate, so there shouldn't be any way to get hung
  up on a single message *other than if the EDDN Gateway is having issues,
  in which case it should be retried anyway*. Any 'bad message' will cause a
  `True` return and thus deletion of the message in *this* call to
  `queue_check_and_send()`.
* There is a new `reschedule` parameter to `queue_check_and_send()`. If
  `True` then it will re-schedule itself at the end.
There is a check in `journal_entry()` for the `Docked` event, and if this
occurs it will schedule `queue_check_and_send()` with `reschedule` set to
`False` so that we don't end up with multiple parallel schedulings.
It's still possible for a docking to have coincided with a scheduled run
and thus cause double-rate sending to EDDN, but we can live with that.
* The same scheduling mechanism is used, with a much smaller delay, to
process more than one queued message per run.
Hence the `have_rescheduled` bool *in* the function, indicating whether a
'fast' reschedule has already been set. This prevents the slow one *also*
being set in this scenario; the slow one will be scheduled once a fast run
finds no more rows to process.
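The interplay of the `reschedule` parameter, the 'fast' reschedule and `have_rescheduled` can be sketched like this. This is a hypothetical outline, not EDMC's actual code: `after()` stands in for tk's `after()`, and the delay values and helper bodies are illustrative.

```python
FAST_DELAY = 100        # ms; illustrative 'fast' delay for draining further rows
SLOW_DELAY = 300_000    # ms; illustrative normal periodic queue attempt

scheduled: list[tuple[int, str]] = []  # records (delay, label) in place of tk after()

def after(delay_ms: int, label: str) -> None:
    scheduled.append((delay_ms, label))

def send_message_by_id(msg_id: int) -> bool:
    # Stand-in: True means "delete the row" -- either the Gateway accepted
    # it, or it's a 'bad message' that should not be retried.
    return True

def queue_check_and_send(reschedule: bool, queue: list[int]) -> None:
    have_rescheduled = False
    if queue and send_message_by_id(queue[0]):
        queue.pop(0)
    if queue:
        # More rows remain: set only the 'fast' reschedule.
        after(FAST_DELAY, "fast")
        have_rescheduled = True
    if reschedule and not have_rescheduled:
        # The 'slow' periodic reschedule, suppressed while fast runs continue.
        after(SLOW_DELAY, "slow")
```

Under this sketch a `Docked` handler would call `queue_check_and_send(reschedule=False, ...)`, so the dock-triggered run never adds a second 'slow' scheduling in parallel.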
* Erroneously used 'CAPI-commoodity' when it's 'CAPI-market' (name of the
CAPI endpoint, not anything to do with EDDN schema names, and '-commodity'
would also be wrong for that).
* Set `header` for (CAPI) `export_commodities()`.
* The eddn parts of the OUT_EDDN_DO_NOT_DELAY -> OUT_EDDN_DELAY change. This
includes the 'sense' of it being inverted from what it was.
* EDDN.REPLAY_DELAY is now a float, as it's used with `time.sleep()`. *This*
is the 400ms value for inter-message cooldown.
* EDDN.REPLAY_PERIOD is still an int, as it's used with tk `after()`. This is
  how often we attempt to process the queue.
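The two constants differ in type because of their consumers; a minimal sketch (the 400ms cooldown is from the notes above, the `REPLAY_PERIOD` value here is illustrative):

```python
import time

class EDDN:
    # Float seconds: the 400ms inter-message cooldown, used with time.sleep().
    REPLAY_DELAY: float = 400 / 1000.0
    # Integer milliseconds for tk after(); the value here is illustrative.
    REPLAY_PERIOD: int = 300

# time.sleep() takes float seconds...
time.sleep(EDDN.REPLAY_DELAY)
# ...while tk's widget.after(ms, callback) takes integer milliseconds.
```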
* EDDN.session is no longer a thing, move that part of EDDN.close() to
EDDNSender.close().
* EDDN queue DB has `id`, not `message_id`.
* Now *looping* in the queue sender, not only sending the oldest message.
* Now that we're not trying to do "did we just dock / are we now docked?" in
  this code, it turns out that both CAPI and Journal messages can use the
  same function for this.
* And as it's no longer journal-specific `EDDN.export_journal_entry()` has
been renamed to `EDDN.send_message()`.
This whole branch now needs to actually implement sending queued messages
when docked, and periodically in the case of initial failures.
* `EDDN.sendreplay()` is no longer used.
* In `prefsvarchanged()` there was a reference to `eddn.replayfile`, so as
  to grey out the "Delay sending..." option if the file wasn't available.
  That's now moot, so it has been removed, but also...
* Comment the purpose of that line in `prefsvarchanged()` because it's not
immediately obvious.
In some cases the check might already have been done, but if not then this
is the last easy place to perform it.
NB: Unlike the old code this does *not* attempt to check "are we docked
*now*?" for triggering sending of previously queued messages. That's
going to need a thread worker.
It's easier to check "should we send this message at all?" earlier. Currently
all of the following ('station data') do so:
* CAPI commodity, outfitting (also fcmaterials) and shipyard.
* Journal commodity, fcmaterials, outfitting, and shipyard.
This flag controls whether commodity, outfitting or shipyard schema messages
are sent. Thus 'MKT' ('market') is misleading. Rename it so the intent when
used is clear.
* This was perhaps originally meant for what the UI option says, i.e. "send
  system and scan data", but is actually being used for anything that is
  **NOT** 'station data' (even though *that* option's name has 'MKT', it
  includes outfitting and shipyard as well).
So, just name this more sanely such that code using it is more obvious as
to the actual intent.
The sense of this `output` flag has been inverted (always?) for a long time.
1. I have the option "Delay sending until docked" showing as *off* in the UI.
2. My config.output value is `100000000001`.
3. The value of this flag is `4096`, which means 12th bit (starting from 1, not
zero).
4. So I have the bit set, but the option visibly off.
So, rename this both to be more pertinent to its use *and* to be correct as to
what `True` for it means.
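The rename and sense-correction could look like the following sketch. The flag value is taken from the notes above, but the function name is illustrative and not the actual EDMC identifier:

```python
# Before: OUT_EDDN_DO_NOT_DELAY -- a set bit effectively meant "do NOT
# delay", inverted versus the UI checkbox text.
# After: OUT_EDDN_DELAY -- same bit position, but True now means exactly
# what the "Delay sending until docked" option says.
OUT_EDDN_DELAY = 4096  # 2**12, per the notes above

def delay_sending_until_docked(output: int) -> bool:
    # With the corrected sense, a set bit matches the UI option being on.
    return bool(output & OUT_EDDN_DELAY)
```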
* It makes more sense for this new class to be concerned with all the 'send it'
functionality, not just 'replay', so rename it.
* Although we're trying to get the schema right *first* time, let's plan
  ahead and version the filename in case we need migrations in the future.
We'll definitely want to query against `cmdr`, and possibly `created`.
We shouldn't need to against other fields, they'll just be checked during
processing of an already selected message.
* sqlite3 open, and creation of table.
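A minimal sketch of that open/create step, assuming illustrative column names beyond `id`, `cmdr` and `created` (the real schema may differ); the versioned filename appears in a comment and `:memory:` is used here to keep the sketch side-effect free:

```python
import sqlite3

# In the plugin this would be a versioned filename, e.g. ending '-v1.db',
# so future migrations can key off the name.
db = sqlite3.connect(":memory:")
db.execute(
    """
    CREATE TABLE IF NOT EXISTS messages (
        id      INTEGER PRIMARY KEY AUTOINCREMENT,
        created REAL NOT NULL,
        cmdr    TEXT NOT NULL,
        message TEXT NOT NULL
    )
    """
)
# Index only what we'll query against: `cmdr`, and possibly `created`;
# other fields are just checked during processing of a selected message.
db.execute("CREATE INDEX IF NOT EXISTS messages_cmdr ON messages (cmdr)")
db.execute("CREATE INDEX IF NOT EXISTS messages_created ON messages (created)")
db.commit()
```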
* Change `load_journal_replay()` to `load_journal_replay_file()` and change
  the semantics to just return the `list[str]` loaded from it. It also no
  longer catches any exceptions.
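The new semantics could be as simple as this sketch (the exact signature is assumed, not taken from the code):

```python
import pathlib
import tempfile

def load_journal_replay_file(path: pathlib.Path) -> list[str]:
    # Just return the loaded lines; callers now handle any exceptions
    # (FileNotFoundError etc.) themselves.
    with open(path, encoding="utf-8") as replay_file:
        return [line.strip() for line in replay_file]

# Quick demonstration against a throwaway file:
with tempfile.TemporaryDirectory() as tmp:
    demo = pathlib.Path(tmp) / "replay.jsonl"
    demo.write_text('{"cmdr": "Jameson"}\n{"cmdr": "Artemis"}\n', encoding="utf-8")
    lines = load_journal_replay_file(demo)
```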
* Remove the "lock the journal cache" on init as it's not necessary.
There's still a lot more changes to come on this.
The EDDN schema is still a work in progress, unsure if we'll use the same
one for both Journal and CAPI sourced data, or separate schemas.
But, so far, this works.
This can simply occur if the *first* load of `NavRoute.json` soft-fails,
meaning the plugin is receiving the bare Journal-file event, which has no
`Route` array.
This has been missed, based on Journal examples, and has been causing issues
at least for `ApproachSettlement`.
So, `filter_localised()` is now used in:
`export_journal_navroute()`
`export_journal_approachsettlement()`
`export_journal_fssallbodiesfound()`
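For reference, the core of what `filter_localised()` does is dropping the `_Localised` duplicate keys a Journal event carries. This is a simplified, non-recursive sketch; EDMC's real implementation also handles nested structures:

```python
def filter_localised(d: dict) -> dict:
    # Drop any '<Key>_Localised' duplicates so only canonical names are sent.
    return {k: v for k, v in d.items() if not k.endswith("_Localised")}
```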
* Need to remove `event`, `horizons` and `odyssey` per signal.
* It's lower case `horizons` and `odyssey` in a(n augmented) journal event.
* It's `event`, not `entry` that `export_journal_entry()` will look for.
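The per-signal elisions above can be sketched as follows (the helper name is illustrative, not from the code):

```python
def strip_signal_keys(signal: dict) -> dict:
    # Remove keys that belong at the top level of the EDDN message,
    # not on each batched FSSSignalDiscovered signal; note the lower-case
    # `horizons`/`odyssey` of an augmented journal event.
    return {k: v for k, v in signal.items()
            if k not in ("event", "horizons", "odyssey")}
```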
* Move the call to `export_journal_fsssignaldiscovered()` to the top level of
  event processing. This ensures we'd still have the *previous* system tracked
  when running under Odyssey.
Also, we can't return any result from this, as we'd still need to process
things like `Location` otherwise.
* Use `logger.trace_if("plugin.eddn.fsssignaldiscovered", ...)`
* Perform all sanity checks, elisions and augmentations in the "not
FSSSignalDiscovered event" sending code.