Some divecomputer backends (ok, right now really only the Aqualung i770R
and i300C) want to know the bluetooth name of the dive computer they
connect to, because the name contains identifying information like the
serial number.
This just adds support for that to our Qt BLE code.
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
If only selected dives were exported into HTML, the statistics would
nevertheless cover all dives - a counter-intuitive behavior. Fix by
adding a selected_only flag to calculate_stats_summary().
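Roughly, the changed entry point looks like this (the exact prototype
is a sketch, not copied from the tree):

    /* fill 'out' with summary statistics; if selected_only is true,
     * unselected dives are skipped */
    void calculate_stats_summary(struct stats_summary *out, bool selected_only);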
Reported-by: Jan Mulder <jlmulder@xs4all.nl>
Signed-off-by: Berthold Stoeger <bstoeger@mail.tuwien.ac.at>
The statistics of the selected dives were calculated
a) into a global object and
b) at a completely different place than where they're used.
There's no plausible reason for either. Therefore, render
into a caller-provided structure at the place of use.
Signed-off-by: Berthold Stoeger <bstoeger@mail.tuwien.ac.at>
Statistics were calculated into global variables every time the
current dive was changed.
Calculate statistics only when needed and into a structure
provided by the caller.
Signed-off-by: Berthold Stoeger <bstoeger@mail.tuwien.ac.at>
This flag had two distinct uses:
- signal that dives were downloaded, not imported
- mark imported dives
Neither is used anymore, therefore remove the flag.
The uemis downloader misused the flag to mark deleted
dives. Instead, misuse the "hidden_by_filter" flag.
Signed-off-by: Berthold Stoeger <bstoeger@mail.tuwien.ac.at>
process_imported_dives() is more efficient for downloaded than for
imported (from a file) dives, because it checks only the divecomputer
of the first dive.
This condition is checked via the "downloaded" flag of the first
dive. Instead, pass an argument to process_imported_dives().
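Sketch of the changed interface (the exact prototype may differ):

    /* downloaded == true: the dives come from a single download, so only
     * the divecomputer of the first dive needs to be checked */
    void process_imported_dives(bool downloaded);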
Signed-off-by: Berthold Stoeger <bstoeger@mail.tuwien.ac.at>
Dive importing is now performed via a distinct table which is
merged into the main dive table. Thus, it is known which of the
dives is new and which is old. This information can now be
implicitly encoded in the parameter-position of merge_dive()
[i.e. pass old as first and new as second dive].
This makes marking of downloaded dives via a flag unnecessary.
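I.e. roughly the following convention (additional parameters elided,
variable names illustrative):

    /* old_dive is already in the table, imported_dive is the new one;
     * the parameter position alone tells merge_dive() which is which */
    merged = merge_dive(old_dive, imported_dive);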
Signed-off-by: Berthold Stoeger <bstoeger@mail.tuwien.ac.at>
Dives are now in all cases imported via distinct dive_tables.
Therefore the "preexisting" marker is useless. Remove.
Signed-off-by: Berthold Stoeger <bstoeger@mail.tuwien.ac.at>
Dives were directly imported into the global dive table and then
merged in process_imported_dives(). Make this interface more flexible,
by passing an independent dive table.
The dive table of the to-be-imported dives will be sorted and merged.
Then each dive is inserted one-by-one into the global dive table.
This actually introduces (at least) two functional changes:
1) If a new dive spans two old dives, it will only be merged with the
first dive. But this seems like a pathological case, which is of
dubious value anyway.
2) Dives unrelated to the import will not be merged. The old code
would happily merge dives that were not even close to the
newly imported dives. A surprising behavior.
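Roughly, the new flow is (the helper names below are illustrative, not
the exact ones in the tree):

    void process_imported_dives(struct dive_table *import_table)
    {
        sort_dive_table(import_table);       /* sort the import */
        merge_imported_dives(import_table);  /* merge within the import only */
        for (int i = 0; i < import_table->nr; i++)
            add_imported_dive(import_table->dives[i]); /* one-by-one into the
                                                          global table */
        import_table->nr = 0;                /* ownership moved to the table */
    }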
Signed-off-by: Berthold Stoeger <bstoeger@mail.tuwien.ac.at>
The old surface interval calculation had fundamental issues:
1) process_all_dives(), which calculates the statistics over *all*
dives, was used to get the pointer to the previous dive.
2) If two dives in the table had the same time, one of those would
have been considered the "previous" dive.
3) If the dive for which the surface interval is calculated is
not yet in the table, no previous dive would be determined.
Fix all this by creating a get_surface_interval() function and
removing the "get previous dive" functionality of process_all_dives().
Remove the process_all_dives() call from TabDiveInformation::updateData().
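A sketch of the new helper (the exact signature in the tree may differ):

    /* surface interval = start of the new dive minus the end of the last
     * dive that ended before it; -1 if there is no previous dive */
    timestamp_t get_surface_interval(timestamp_t when)
    {
        timestamp_t prev_end = 0;
        for (int i = 0; i < dive_table.nr; i++) {
            struct dive *d = dive_table.dives[i];
            if (dive_endtime(d) < when && dive_endtime(d) > prev_end)
                prev_end = dive_endtime(d);
        }
        return prev_end ? when - prev_end : -1;
    }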
Reported-by: Jan Mulder <jlmulder@xs4all.nl>
Signed-off-by: Berthold Stoeger <bstoeger@mail.tuwien.ac.at>
Because some BLE operations can be very slow (device and service
discovery etc), we have a rather excessive default timeout for BLE
(currently set to 12 seconds).
But once we actually have started doing IO, that long timeout can be a
big performance problem, when the libdivecomputer backend has support
for retry and packet loss.
For that reason, libdivecomputer has a 'set_timeout()' function that
allows the divecomputer backend to say how quickly it expects the dive
computer to answer before the backend will start resending packets.
Let's just implement that for the actual IO side of BLE too. The
default timeout value remains the general BLE timeout, and this only
affects the actual IO phase, but it improves things enormously for the
case where there is packet loss at that point.
For example, on the Aqualung i770R, the timeout for packet loss now
ends up being just one second rather than the full 12 seconds of the
default BLE timeout, which gets the retry going much faster.
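On the Subsurface side this is essentially just storing the requested
value and using it in the IO wait loops; a rough sketch (the member
name is an assumption):

    dc_status_t BLEObject::set_timeout(int duration)
    {
        timeout = duration;   /* ms; used instead of the 12s BLE default
                                 when waiting for reply packets */
        return DC_STATUS_SUCCESS;
    }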
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
When we enable notifications, we actually want to make sure to wait for
that write to have completed before we start communicating with the
device, because otherwise we might lose notification events.
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
In commit 30fb7bf35c ("qt-ble: set up infrastructure for better
preferred service choice") I moved the service filtering from the
addService() callback into the "select_preferred_service()" function
that picks the right service for the device.
That was nice for debugging, since it meant that we showed the details
of _all_ services, but it also meant that we ended up starting service
discovery on _all_ services, whether they looked at all interesting or
not.
And that can make the BLE device discovery process quite a bit slower.
The debugging advantage is real, but honestly, service discovery can
generally be better done with specialized tools like the Nordic nRF app,
so the debugging advantage of just listing all the details of all the
services is not really worth the discovery slowdown in general.
So move the basic "filter by uuid" back to the service discovery phase,
and don't bother starting service detail discovery for the services that
we can dismiss immediately just based on the service UUID.
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
The error handling was incorrect for the case where we successfully
opened the libdivecomputer iostream in divecomputer_device_open(), but
the dc_device_open() call failed.
When the dc_device_open() failed, we would (correctly) not do the
dc_device_close() but we would _also_ not do the dc_iostream_close() to
close the underlying file descriptor, which is wrong.
Normally this isn't all that noticeable, partly because the common case
is that dc_device_open() succeeds if you actually do have a dive
computer connected, but also because most of the time it just leaked a
file descriptor or something like that.
However, particularly for the POSIX serial device case, libdivecomputer
does a

    ioctl(device->fd, TIOCEXCL, NULL)

call to make serial opens exclusive. This is what we want - but if we
then fail at closing the serial file descriptor, we won't be able to
retry the import at all because now the next open will fail with EBUSY.
So the error handling was incorrect, and while it doesn't usually matter
all that much, it can be quite noticeable particularly when you have
transient errors.
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
If no dives were downloaded in do_libdivecomputer_import(), an
error message would be produced. To check for downloaded dives,
the function would access the global downloadTable instead of
the actual table the dives are imported to (at the moment the
same - but the interface allows for a different table).
Move the error-creation to the caller to avoid this situation.
An alternative option would be to check the actual table the
dives were supposed to be downloaded to. But from a program-logic
point of view "no dives" does not seem like an error condition.
Signed-off-by: Berthold Stoeger <bstoeger@mail.tuwien.ac.at>
The Uemis downloader determines the dive-number to be downloaded
by either checking the download-table [interrupted connection] or
the global dive table [fresh download].
The downloadTable is passed in the device data structure, but
in the function to determine the latest dive, the global
downloadTable is accessed directly [thus supposing that this
table was passed in device data].
Instead, use the table from device data to avoid funny surprises
should we change to a non-global download table.
Signed-off-by: Berthold Stoeger <bstoeger@mail.tuwien.ac.at>
On DAN-file import, the dive-list was processed after each dive
except the first. This seems bogus and inefficient. An artefact from
old code? In any case, remove it.
Signed-off-by: Berthold Stoeger <bstoeger@mail.tuwien.ac.at>
Since we now keep track of up to 4 DCs, we don't want to display the
last used one but rather the one that is connected.
Signed-off-by: Joakim Bygdell <j.bygdell@gmail.com>
Currently, we can only delete dives that are indexed in the main
dive table. In the future, we will have to delete dives outside
of this table (e.g. for undo). Therefore, split out the free_dive()
function from delete_single_dive(), which takes an index into
the main dive table.
In the process, adopt the dive-freeing code from clear_dive(),
which frees more data than the code in delete_single_dive().
This potentially fixes a memory leak.
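Sketch of the intended split (the table-manipulation helper below is
illustrative only):

    void free_dive(struct dive *d);    /* frees the dive and all data it owns */

    void delete_single_dive(int idx)   /* still index-based, as before */
    {
        struct dive *d = get_dive(idx);
        remove_dive_from_table(idx);   /* illustrative: unlink entry idx */
        free_dive(d);
    }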
Signed-off-by: Berthold Stoeger <bstoeger@mail.tuwien.ac.at>
This reverts commit 1c4a859c8d,
where the override modifiers were removed owing to the noisy
"inconsistent override modifiers" warning, which is default-on in clang.
This warning was disabled in 77577f717f,
so we can reinstate the overrides.
Signed-off-by: Berthold Stoeger <bstoeger@mail.tuwien.ac.at>
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
In d815e0c947 a dive_table pointer
was added to the parsing functions to allow parsing into tables
other than the global dive table. This will be necessary for undo of
import and for a cleaner interface. A few cases, notably
CSV and proprietary formats, were forgotten.
Implement parsing into arbitrary tables also for these cases.
Signed-off-by: Berthold Stoeger <bstoeger@mail.tuwien.ac.at>
A few of these prototypes were already in import-csv.h.
Put them in an 'extern "C" { ... }' block.
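I.e. the usual guard pattern, so the same header works for C and C++
callers:

    #ifdef __cplusplus
    extern "C" {
    #endif

    /* ... C prototypes of the CSV import functions ... */

    #ifdef __cplusplus
    }
    #endif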
Signed-off-by: Berthold Stoeger <bstoeger@mail.tuwien.ac.at>
The existing BLE dive computers treat BLE as the packetized protocol it
is, and read whole packets at a time.
However, the Mares BlueLink backend treats it as just a basic "serial
over BLE" transport, and for historical reasons reads the reply packets
in smaller chunks.
This allows that kind of IO behavior, where if the divecomputer backend
reads just a part of a packet, we'll split the packet, return the part
the user asked for, and push back the leftover packet onto the received
packet queue.
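The core of it is a small split-and-requeue step in the BLE read path;
a rough sketch (variable names are assumptions):

    QByteArray packet = receivedPackets.takeFirst();
    if ((size_t)packet.size() > size) {
        /* hand back only what was asked for, requeue the remainder */
        receivedPackets.prepend(packet.mid(size));
        packet.truncate(size);
    }
    memcpy(data, packet.constData(), packet.size());
    *actual = packet.size();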
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
We traditionally only allow samples to have a time format of 'mm:ss', so
if you have a dive over an hour, you would just have a minutes field
larger than 60 minutes.
But Matthew Critchley is trying to import some dives from his VMS
Redbare CCR, and the sample timestamp format he has is of the type
'hh:mm:ss'.
That could be fixed by an xslt translation, but there's no real reason
why we couldn't just support that format too.
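The change boils down to also accepting a third field; something like
this (variable names illustrative):

    int h = 0, m = 0, s = 0;
    if (sscanf(buf, "%d:%d:%d", &h, &m, &s) == 3)
        seconds = h * 3600 + m * 60 + s;   /* 'hh:mm:ss' */
    else if (sscanf(buf, "%d:%d", &m, &s) == 2)
        seconds = m * 60 + s;              /* traditional 'mm:ss' */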
Reported-by: Matthew Critchley <matthew.s.critchley@gmail.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
This is perhaps overly verbose, but the timing details helped figure out
some EON Core download issues, and it's nice to see when things actually
happen.
It's also good to see when the data actually enters our queues, and when
we read and write the packets. That might help debug the issues Fabio
is seeing with the Mares Bluelink.
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
We used to just find all services and connect the characteristics change
signal etc to them all, but we really only care about the actual
preferred service that we'll be using.
So move the qt ble signal connection to after we've selected the
preferred service that we will actually be enabling notifications on
and doing the writes to.
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
XMP is a media-metadata standard based on XML which may be used
across a variety of media formats. Some video-processing software
writes XMP data without updating the native metadata fields.
Therefore, we should aim at reading XMP metadata and give priority
to XMP data over native fields.
Pros:
- Support for *all* common media formats.
Cons:
- XML (complex, verbose, chaotic).
- Does not even come close to fulfilling its promise of being
well defined (see below).
Implement a simple XMP-parser using libxml2. Connect the XMP-parser to
the existing Quicktime/MP4 parser.
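The entry point is roughly of this shape (function and struct names are
illustrative, not necessarily the ones used in the tree):

    #include <libxml/parser.h>
    #include <libxml/tree.h>

    static void parse_xmp_buffer(const char *buf, int size, struct metadata *data)
    {
        xmlDoc *doc = xmlReadMemory(buf, size, "xmp.xml", NULL, 0);
        if (!doc)
            return;
        /* walk the tree from xmlDocGetRootElement(doc), looking for the
         * creation date (see the two variants described below) */
        xmlFreeDoc(doc);
    }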
First problem encountered: According to the spec, XMP data is supposed
to be put in the 'XMP_' atom. But exiftool, for example, instead
writes a 'uuid' atom with a special 16-byte uid. Implement both;
more options will probably follow.
Second problem: two versions of recording the creation date were found:
1) The content of a <exif:DateTimeOriginal> tag.
2) The xmp:CreateDate attribute of a <rdf:Description> tag.
Here too, more versions are expected to surface and will have
to be supported in due course (with an obvious priority problem).
Signed-off-by: Berthold Stoeger <bstoeger@mail.tuwien.ac.at>
We use that in the mobile app to scale the whole app, as all sizes there
are relative to the default font.
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
We used to just blindly pick "first" and "last" characteristic from the
preferred service, and that was stupid but happened to work for the dive
computers we supported. Note that for some of them, "first" and "last"
was actually the *same* characteristic, since it could be a single one
that supported both.
However, this first/last hack definitely doesn't work for the Mares
BlueLink BLE dongle, and it's really all pretty wrong anyway.
So re-organize the code to actually look at the properties of the
characteristics. I don't have a BlueLink to test with, but my EON Core
and Shearwater Perdix AI are still happy with this, and the code
conceptually makes a lot more sense.
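The gist of the new selection (member names are assumptions):

    foreach (const QLowEnergyCharacteristic &c, preferredService()->characteristics()) {
        if (c.properties() & (QLowEnergyCharacteristic::Notify |
                              QLowEnergyCharacteristic::Indicate))
            notifiableCharacteristic = c;    /* where replies arrive */
        if (c.properties() & (QLowEnergyCharacteristic::Write |
                              QLowEnergyCharacteristic::WriteNoResponse))
            writableCharacteristic = c;      /* where we send commands */
    }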
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
And remove some includes and defines that are not used any more after
removal of the GPS webservice code.
Signed-off-by: Jan Mulder <jlmulder@xs4all.nl>
Fix multiple run-time errors in connect call introduced in 504e912512.
1) Set the proper signature of the signal. 2) Make the used slot
a real slot (so move it to the proper section in the header) and
3) set the proper signature for the slot.
It is highly unlikely that normal users notice the runtime errors and
possibly unwanted behavior, as this all deals with the subtle GPS
service update threshold.
Signed-off-by: Jan Mulder <jlmulder@xs4all.nl>
We used to just pick the first non-standard service we found (with a
special case for the Heinrichs Weikamp dive computers that have an
actual registered standard service).
We then waited for that service to finish discovery, and started using
it.
This changes the logic to wait for _all_ services to finish discovery,
and then after that we pick the one we like best. Right now the rule
for picking a preferred service is the same one we had before, but the
difference is that we now have the full discovery data, so we *could* do
something better.
Plus this makes our debug messages a lot more legible, when we don't
have the mix of overlapping service discovery with the actual IO we do
to the preferred service.
NOTE! This doesn't much matter for most of the dive computers that we
currently support BLE for. They don't tend to have a lot of odd
services.
But at least the Mares BlueLink and the Garmin Descent both have
multiple services and it's not obvious which one to use, and this will
make it not only easier to debug those, it will make it easier to pick
the right preferred service descriptor to use.
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
This is not only much clearer (and smaller code), but it also lowers the
latency for the waiting, since we don't always wait for the full 100ms.
Get rid of the now unused "waitfor()" function that just unconditionally
waited for 100ms.
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
process_dives() is used to post-process the dive table after loading
or importing. The first parameter states whether this was after
load or import.
Especially in the light of undo, load and import are fundamentally
different things. Notably, the latter should be undo-able, whereas
the former is not. Therefore, as a first step to make import undo-able,
split the function into two versions and remove the first parameter.
It turns out that the load-version is very light. It only sets the
DC nicknames and sorts the dive-table. There seems to be no reason
to merge dives.
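The resulting split, roughly (the exact signatures in the tree may
differ):

    void process_loaded_dives(void);    /* sets DC nicknames, sorts the dive table */
    void process_imported_dives(void);  /* additionally merges the newly imported dives */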
Signed-off-by: Berthold Stoeger <bstoeger@mail.tuwien.ac.at>
We only store the address part of the connection name, so don't try to find an
exact match; try to find the sub-string instead.
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
This does feel clumsy and complicated. This is a lot of special case
handling and a lot of boilerplate for something that really should be
quite simple.
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
In visit_one_node() in core/parse-xml.c the name is extracted into
a static buffer. There seems to be no need for this buffer to be static,
as the name is only passed to the entry() function which (hopefully)
does not store a reference to the name anywhere.
If it does, this would need a *big* *fat* comment.
Signed-off-by: Berthold Stoeger <bstoeger@mail.tuwien.ac.at>
The existing code creates a deterministic ID (not exactly "unique") in order to
help us avoid merge conflicts in git-storage mode. But as a side effect, if we
re-download the same dive twice from a dive computer that supports GPS (right
now only the Garmin Descent Mk1) we are guaranteed to create the same dive site
uuid when we do this. So when we download a dive - whether we will actually
*use* that dive later or not - we will be filling in the dive site information
with the data we got from the dive computer.
... and in the process we will be overwriting any data that was filled in
manually. The name of the dive site, but also possibly even the GPS of the dive
site (maybe the user decided to edit that using the map, because while the
automatically downloaded GPS data was "correct", maybe the user wanted to
change it to be the actual under-water location using the satellite data,
rather than the place where you started the dive or where you surfaced).
In order to avoid this collision, this patch just makes the libdivecomputer
download not use the dive time, but "time of download" for the dive site time,
and thus effectively generate a new uuid for every download.
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
On import of dive media, the timestamp is read from the
metadata to check if the image belongs to the selected dives.
The pictures are then listed in a dialog.
Currently, the metadata is read twice if images are outside
of a dive: once in picture_check_valid() and, if it turns
out that the picture is not valid, again in picture_get_time()
to display the proper timestamp.
Even though metadata-extraction is reasonably fast, this is
a bit of an embarrassment.
Instead, read the timestamps only once in the constructor of
the dialog and from then on only use these timestamps. Keep
the timestamps in a QVector. Rename the picture_check_valid()
function to picture_check_valid_time() and pass a timestamp
instead of a filename.
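Conceptually, the constructor now does something like this (names are
illustrative, and picture_get_time() is assumed to return the timestamp
for a filename):

    // read each file's metadata exactly once and cache the result
    QVector<timestamp_t> timestamps;
    timestamps.reserve(fileNames.size());
    for (const QString &fn: fileNames)
        timestamps.append(picture_get_time(qPrintable(fn)));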
Signed-off-by: Berthold Stoeger <bstoeger@mail.tuwien.ac.at>
The merge_one_sample() function adds a sample to the destination
dive if dives are merged. For long periods between samples at surface
depths, it adds a surface interval.
To decrease the number of global objects, make the sample structure
non-static. Of course, initialization of an on-stack structure is
slower. Therefore move it into the corresponding if. Thus, the
structure will be initialized only once per surface-interval.
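I.e. roughly (condition and field details simplified and illustrative):

    if (long_gap_at_surface) {
        struct sample surface = { 0 };   /* now initialized only when needed */
        surface.time.seconds = surface_time;
        /* ... fill in the remaining fields and append to the merged dive ... */
    }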
Signed-off-by: Berthold Stoeger <bstoeger@mail.tuwien.ac.at>