This way individual pieces can be turned on and off.
The commit also adds code to read from a disk image (instead of the SDA)
without all the long timeouts.
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
Never make trivial changes without testing them. This was missing a '!'
before the strcmp - so the wrong code got executed when trying to get the
DeviceId and everything afterwards failed without a valid DeviceId.
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
This was necessary for the Uemis downloader when we used the SDA file
format as an intermediary data format and imported that as an XML buffer.
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
The downloader has been integrated into Subsurface for a while and with
the recent change to no longer have it create the old-style SDA files as
an intermediary format, there is no need to support that format in the
XML parser anymore.
This deletes almost 300 lines of code. Yay!
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
The initial downloader reused the XML parsing of SDA files that was
implemented early in order to support the information extracted from the
SDA with the Java applet. But creating this intermediary XML file and
handing it off to the XML import function always seemed like an ugly way
to do things. This became even more obvious when adding more features to
the Uemis downloader.
This commit completely changes the downloader to instead create dives and
record them directly.
This also adds support for divespots (which are stored in a separate
database that needs to be queried after the divelog and dive entries have
been combined - the Uemis firmware clearly was written by monkeys on
crack - oh wait: I'm trusting these same people to get the deco right?).
This commit leaves the SDA import capability in the XML parser intact.
I'll remove that later. Because of this it actually adds a few lines of
code, but the overall change will be a substantial code deletion.
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
Actually, it's even better than that. Thanks to the new divecomputer
datastructure we can now simply look up in the dive_table which dives have
been downloaded from this specific Uemis SDA.
This patch removes the old gconf based code - which leads to one
unfortunate problem: the first time a Uemis SDA owner runs this version of
Subsurface against their data file ALL dives will be downloaded again
(which may not be a bad thing as we have improved a few other details of
Uemis support so now they get their deco information, surface pressure and
other data that we have started to support since 2.1). Still, this is not
ideal. But I didn't want to keep the legacy code around since this new
solution is so much cleaner.
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
I was a little too eager to add the deco feature to Subsurface. Jef and I
went back and forth a few more times and the definition of those events
changed. I guess I shouldn't have committed that code until the
corresponding libdivecomputer code had been pushed.
This commit now brings us in sync with the current master of
libdivecomputer (but should compile with 0.2 as well - only deco events
won't work then).
One issue that I see is that deco / ndl aren't really a good fit for the
event model. I actually disabled the drawing of the little yellow
triangles for ndl events as, for example, on the Uemis those events are
created whenever the remaining non-stop time changes - and that can be
every few seconds.
The correct solution may be to treat this as a function of the samples,
but for now this works and is tested with both OSTC and Uemis SDA.
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
Apply the decimal time conversion hack for JDiveLog import if there are
fewer than 2 digits in the decimal part (and the value is less than 60).
Signed-off-by: Miika Turkia <miika.turkia@gmail.com>
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
Use the decimal time format fallback also for one digit numbers as
Linus suggested. Thus 1.1 min would result in 1 min 6 sec.
Signed-off-by: Miika Turkia <miika.turkia@gmail.com>
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
This actually triggers for one of our insane test dives (test15): it has
no samples, so we created a fake dive computer entry with a fake
profile in it, but we didn't copy the events over.
Having a dive with no samples, yet having events from the dive computer,
sounds pretty bogus. But that test-case did show that when that bogus
situation happens, we had two independent buglets: (a) we didn't insert
the events into the fake dive computer entry we used and (b) we would
then mix up the events of the fake dive computer entry with the first
dive computer of a dive.
Fix this, just to make test15 happy again. And eventually, when we
actually plot the information for multiple dive computers, fixing case
(b) would become necessary even for real dives.
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
This removes the tripflag name array, since it's not actually useful.
The only information we ever save in the XML file is whether a dive is
explicitly not supposed to ever be grouped with a trip ("NOTRIP"), and
everything else is implicit.
I'm going to simplify the trip flags further (possibly removing it
entirely - like I did for dive trips already), and don't like having to
maintain the tripflag_names[] array logic.
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
Both dives and dive trips have the same 'tripflag' thing, but they are
used very differently. In particular, for dive trips, the only case
that has any meaning is the TF_AUTOGEN case, so instead of having that
trip flag, replace it with a bitfield that says whether the trip was
auto-generated or not.
And make the one-bit bitfields explicitly unsigned. Signed bitfields
are almost always a mistake, and can be confusing.
Also remove a few now stale macros that are no longer needed now that we
don't do the GList thing for dive list handling, and our autogen logic
has been simplified.
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
This is a hack to convert time stored in decimal notation to proper
seconds. When using metric units, the default way for JDiveLog to store
seconds is to put the number of seconds after the decimal point (1.20 is
1 minute 20 seconds). In some odd cases the seconds are reportedly
100-based, so we need to convert those to seconds (1.33333 will become
1 minute 20 seconds).
Signed-off-by: Miika Turkia <miika.turkia@gmail.com>
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
This makes the dive trip auto-generation a separate pass from the
showing of the dive trips, which makes things much more understandable.
It simplifies the code a lot too, because it's much more natural to
generate the automatic trip data by walking the dives from oldest to
newest (while the tree model wants to walk the other way).
It gets rid of the most annoying part of using the gtk tree model for
dive trip management, but some still remains.
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
It had become a write-only field (apart from some now useless debugging)
when simplifying the remove_autogen_trips() function.
So remove it.
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
I'm trying to remove (or at least simplify) the gtk tree model usage for
our trip handling, but I'm doing it in small chunks. The goal is to
just do all our trip handling logic explicitly using our own data
structures, and use the gtk tree model purely for showing the end
result.
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
We already kept a count of dives per trip in order to figure out when
there are no more dives left and the trip needs to be freed. Now we
explicitly keep track of the list of dives associated with the trip too,
which simplifies the "find the time of the trip" logic.
We may want to sort it in time, but for now this is mainly about trying
to keep track of the divetrip relationships explicitly. I want to move
away from the whole "use the gtk tree model to keep track of things"
approach.
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
When picking the "better" trip, we stupidly looked not at the trip
location, but at the _dive_ location.
Which obviously didn't actually pick the "better" trip information at
all, since it never actually looked at the trip itself.
Oops.
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
Commit bb6b6b49a6d4 "Start merging dives by keeping the dive computer data
from both dives" created a compile time warning. This simply adds an #if /
#endif around the now unused function.
Yes, this might accelerate bit rot in the code, but I just dislike the
warning message when compiling Subsurface.
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
Now that we have dive computer device ID fields etc, we can do a much
better job of merging the dive computer data.
The rule is
- if we actually merge two disjoint dives (ie extended surface interval
causing the dive computer to think the dive ended and turning two of
those dives into one), find the *matching* dive computer from the
other dive to combine with.
- if we are merging dives at the same time, discard old-style data with
no dive computer info (ie act like a re-download)
- if we have new-style dive computers with identifiers, take them all.
which seems to work fairly well.
There's more tweaking to be done, but I think this is getting to the
point where it largely works.
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
It's annoying to see water salinity data in the XML that isn't relevant,
and adding the default value just because the dive got downloaded from
libdivecomputer is definitely wrong.
We should set the water salinity explicitly only if we have it
explicitly set on the dive computer.
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
This simplifies the vendor/product fields into just a single "model"
string for the dive computer, since we can't really validly ever use it
any other way anyway.
Also, add 'deviceid' and 'diveid' fields: they are just 32-bit hex
values that are unique for that particular dive computer model. For
libdivecomputer, they are basically the first word of the SHA1 of the
data that libdivecomputer gives us.
(Trying to expose it in some other way is insane - different dive
computers use different models for the ID, so don't try to do some kind
of serial number or something like that)
For the Uemis Zurich, which doesn't use the libdivecomputer import, we
currently only set the model name. The computer does have some kind of
device ID string, and we could/should just do the same "SHA1 over the
ID" to give it a unique ID, but the pseudo-xml parsing confuses me, so
I'll let Dirk fix that up.
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
Also, note that we do *not* do the "find_sample_offset()" any more when
we merge two dives that happen at the same time - since we just keep
both sets of dive computer data around.
But we keep the function to find the best offset around, because we may
well want to use it later when *showing* the dive, and trying to match
up the different sample data from the multiple dive computers associated
with the dive.
Because of that, this causes warnings about the now unused function.
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
Now that we have more complete dive computer information, we can use
that to match the dives we download, and stop with the hacky "Would we
merge this" check.
For XML files without the explicit dive computer information, go back to
checking the exact dive time.
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
This also knows how to save and restore multiple dive computers in the
XML data, but there's no way to actually *create* that kind of
information yet (nor do we display it). Tested by creating fake XML
files with multiple dive computers by hand so far.
The dive computer information right now contains (apart from the sample
and event data that we've always had):
- the vendor and product name of the dive computer
- the date of the dive according to the dive computer (so if you change
the dive date manually, the dive computer date stays around)
Note that if the dive computer date matches the dive date, we won't
bother saving the redundant information in the XML file.
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
When we have a preferred dive computer that overrides old information
when merging two dives, we just copy the dive computer data over.
However, we need to clear the source of the dive computer data so that
we then don't free the sample data when that old source of the newly
merged dive gets free'd.
This fixes a memory scribble (and likely SIGSEGV) for the "prefer
downloaded" case.
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
If the dive computer does not record the sample interval, but records time
stamps on the samples, we use those.
This also fixes a couple of corner cases that were noticed in new log
samples, as well as an issue when importing dives logged in imperial units.
Signed-off-by: Miika Turkia <miika.turkia@gmail.com>
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
For now we only have one fixed divecomputer associated with each dive,
so this doesn't really change any current semantics. But it will make
it easier for us to associate a dive with multiple dive computers.
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
We used to avoid some extra allocations by just allocating the dive
samples as part of the 'struct dive' allocation itself, but that ends up
complicating things, and will make it impossible to have multiple
different sets of samples (for multiple dive computers).
So stop doing it. Just allocate the dive samples array separately.
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
When this was first implemented the assumption was that a downloaded dive
that is to be merged with an existing dive would have the same time stamp.
But as Linus pointed out even back then, this does fail if a dive has been
merged with a download from a different dive computer before (think:
download from computer a, then download the same dive from b, then improve
something in the parsing from computer a and try to redownload; the time
stamp could have changed).
This commit also fixes a silly omission in the merge_dives() function
(which ended up ALWAYS preferring the downloaded dive) and finally
implements the necessary changes to mark dives downloaded from a Uemis SDA
as well.
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
Prior to this commit, gtk often decided to collapse the trip with the
selected dive after the user imported or downloaded additional dives.
Since Subsurface tracks dives as being selected even after gtk collapses a
trip (which clears all selection state as far as gtk is concerned) this
could lead to the strange situation that the user could click on a new
dive to select it without unselecting the already selected dive - and
suddenly edit or delete did things that were entirely unwanted.
With this change we explicitly save and then restore the tree state around
import and download operations. This ensures that the same dive(s) stay
selected and trips stay expanded and therefore avoids the issues described
here.
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
The code pretended to support this for libdivecomputer based downloads,
but it had never been hooked up when the native Uemis downloader was
implemented. When I finally decided to close that feature gap I realized
that the original code was, shall we say, "aspirational" or "completely
bogus" and therefore never worked.
So instead of just hooking up the code for the Uemis downloader I instead
implemented this correctly for the first time for both libdivecomputer and
the native Uemis downloader.
In order not to have to mess with multithreaded Gtk development I simply
opted for a helper function that fires on a 100ms timeout and ends the
dialog without a response. This way we can run the dialog while
waiting for the download to finish, still update the progress bar and
respond in a useful manner to the user clicking cancel.
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
- Create a grid for each dive printed.
- We change justify "center" to "left", which helps differentiate each component of the array.
Signed-off-by: Salvador Cuñat <salvador.cunat@gmail.com>
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
The divecomputer download code will stop at a matching dive (unless
you check the "Download all dives" option when downloading).
However, the matching dive is an *exact* match, which works well when
you have a single dive computer, but is a big pain when you have
multiple. What happens is that the date of the dive will be determined
by whatever dive computer you used first, and then downloading from
other dive computers will not match exactly, but will merge (if the
computers are within a minute of each other).
And that will continue to happen every time you try to download from
that other dive computer.
So use the same logic as for the automatic dive merging: consider
"within one minute" to be a matching dive. So don't download dives
that will be merged - unless the user asks for it.
We do want to have some way of saying "force download of all dives
from today" or something like that, I suspect. Because while I don't
want to re-download *every* dive, I might want to force-merge the last
<N> dives.
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
Once again, buffer_insert contained a blatant bug.
The code wasn't copying the trailing '\0' when extending the string, which
usually didn't end up blowing up the code (and therefore kept the bug
hidden until now) because of the way realloc reused memory - we just had
trailing garbage strings. But sometimes we weren't so lucky and the strlen
in a subsequent call of buffer_insert would run past the end of the
allocated buffer.
Oops.
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
We are accessing offset 24 in an array of length 24. To make things easier
for the base64 conversion we just treat this as an off-by-three error and
instead create an array large enough for 27 elements and convert a
sufficient number of base64 chars to initialize all of them.
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
Simply call "make LIBDCDEVEL=1" and the libdivecomputer includefiles are
expected in ../libdivecomputer/include and the actual library is linked
from ../libdivecomputer/src/.libs
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
This may seem like an esoteric corner case, but it will actually happen
reliably when re-downloading dives from the Uemis SDA:
If the user selects "Force download of all dives" in the "Download from
divecomputer" dialog and if the SDA runs out of space and needs to be
unmounted and remounted, then for the 'Retry' the 'force' flag should be
cleared - or the user will once again start from the first dive which
almost certainly is not what they expect.
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
If we ran out of space on the Uemis SDA during download and the user
unmounted, unplugged and replugged the SDA, we need to take care to
correctly reset the file number we use for finding the correct ANS file.
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
Start every step with much longer timeouts (until we get the first
response back), but then use shorter timeouts once we have started
receiving data.
This uses up fewer of the ANS files and allows us to get more dives
downloaded before the SDA has to be unplugged to reset communications,
yet at the same time it still improves the overall download time.
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
This mirrors commit 59929fdb5d2a "Mark divelist changed as we download
dives from a dive computer" which only fixed things for the
libdivecomputer case.
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
The GPS parsing and saving were using sscanf and sprintf respectively, and
since it is using floating point values (boo!) that affects both of
them. In a C/US locale, we use a period for decimal values, while most
European locales use a comma.
We really should probably just fix things to use integer values (degrees
and nanodegrees?) but this is the simplest fix/workaround for the issue.
Probably nobody ever really noticed until I tested the Swedish locale
for grins, since we don't have a good way to actually set the GPS
coordinates yet. I've got a few dives with GPS information that I
entered manually.
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
Most of the dive computers I have access to don't do the whole surface
event thing at the beginning or the end of the dive, so when you merge
two consecutive dives, you got this odd merged dive where the diver
spent the time in between at a depth of 1.2m or so (whatever the dive
computer "I'm now under water" depth limit happens to be).
Don't do that. Add surface events at the end of the first dive to be
merged, and the beginning of the second one, so that the time in between
dives is properly marked as being at the surface.
The logic for "time in between dives" is a bit iffy - it's "more than 60
seconds with no samples". If somebody has dive computers with samples
more than 60 seconds apart, this will break and we may have to revisit
the logic. But dang, that's some seriously broken sample rate.
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
Commit 38c79d149d ("Simplify and clean up dive trip management")
simplified the code a bit *too* much, and removed the check for
"dive->selected".
As a result, trying to delete a dive resulted in *all* dives being
deleted.
Oops.
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
"record_dive()" won't do that, since otherwise we'd mark the dive list
changed when we load it from an XML file.
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>