Apply the decimal time conversion hack for JDiveLog import if there are
fewer than 2 digits in the decimal part (and the value is less than 60).
Signed-off-by: Miika Turkia <miika.turkia@gmail.com>
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
Use the decimal time format fallback also for one-digit numbers, as
Linus suggested. Thus 1.1 min would result in 1 min 6 sec.
Signed-off-by: Miika Turkia <miika.turkia@gmail.com>
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
This actually triggers for one of our insane test dives (test15): it has
no samples, so we created a fake dive computer entry with a fake
profile in it, but we didn't copy the events over.
Having a dive with no samples, yet having events from the dive computer,
sounds pretty bogus. But that test-case did show that when that bogus
situation happens, we had two independent buglets: (a) we didn't insert
the events into the fake dive computer entry we used and (b) we would
then mix up the events of the fake dive computer entry with the first
dive computer of a dive.
Fix this, just to make test15 happy again. And eventually, when we
actually plot the information for multiple dive computers, fixing case
(b) would become necessary even for real dives.
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
This removes the tripflag name array, since it's not actually useful.
The only information we ever save in the XML file is whether a dive is
explicitly not supposed to ever be grouped with a trip ("NOTRIP"), and
everything else is implicit.
I'm going to simplify the trip flags further (possibly removing it
entirely - like I did for dive trips already), and I don't like having to
maintain the tripflag_names[] array logic.
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
Both dives and dive trips have the same 'tripflag' thing, but they are
used very differently. In particular, for dive trips, the only case
that has any meaning is the TF_AUTOGEN case, so instead of having that
trip flag, replace it with a bitfield that says whether the trip was
auto-generated or not.
And make the one-bit bitfields explicitly unsigned. Signed bitfields
are almost always a mistake, and can be confusing.
Also remove a few stale macros that are no longer needed now that we
don't do the GList thing for dive list handling, and our autogen logic
has been simplified.
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
This is a hack to convert time stored in decimal notation to proper
seconds. When using metric units, JDiveLog by default stores the seconds
after the decimal point (1.20 is 1 minute 20 seconds). In some odd cases
it is reportedly possible that the fraction is actually 100-based, i.e. a
true decimal fraction of a minute, so we need to convert that to seconds
(1.33333 will become 1 minute 20 seconds).
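A minimal sketch of that fallback (a hypothetical helper, not the actual
import code; the trigger condition is simplified):

    #include <math.h>

    /* Sketch only: 'decimal_digits' is the number of digits JDiveLog
     * wrote after the decimal point. */
    static int jdivelog_duration_to_seconds(double value, int decimal_digits)
    {
        int minutes = (int)value;
        double frac = value - minutes;

        if (decimal_digits == 2 && frac * 100 < 60)
            /* "m.ss" notation: the fraction digits are literal seconds */
            return minutes * 60 + (int)lrint(frac * 100);
        /* plain decimal minutes: 1.1 -> 1:06, 1.33333 -> 1:20 */
        return minutes * 60 + (int)lrint(frac * 60);
    }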
Signed-off-by: Miika Turkia <miika.turkia@gmail.com>
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
This makes the dive trip auto-generation a separate pass from the
showing of the dive trips, which makes things much more understandable.
It simplifies the code a lot too, because it's much more natural to
generate the automatic trip data by walking the dives from oldest to
newest (while the tree model wants to walk the other way).
It gets rid of the most annoying part of using the gtk tree model for
dive trip management, but some still remains.
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
It had become a write-only field (apart from some now useless debugging)
when simplifying the remove_autogen_trips() function.
So remove it.
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
I'm trying to remove (or at least simplify) the gtk tree model usage for
our trip handling, but I'm doing it in small chunks. The goal is to
just do all our trip handling logic explicitly using our own data
structures, and use the gtk tree model purely for showing the end
result.
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
We already kept a count of dives per trip in order to figure out when
there are no more dives left and the trip needs to be freed. Now we
explicitly keep track of the list of dives associated with the trip too,
which simplifies the "find the time of the trip" logic.
We may want to sort it in time, but for now this is mainly about trying
to keep track of the divetrip relationships explicitly. I want to move
away from the whole "use the gtk tree model to keep track of things"
approach.
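Conceptually, the bookkeeping now looks something like this (a rough
sketch; the field names are illustrative and not the real structures):

    struct dive;

    struct dive_trip_sketch {
        int nrdives;            /* we already kept this count */
        struct dive **dives;    /* new: the dives that belong to the trip */
        long long when;         /* trip time = time of its earliest dive */
    };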
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
When picking the "better" trip, we stupidly looked not at the trip
location, but at the _dive_ location.
Which obviously didn't actually pick the "better" trip information at
all, since it never actually looked at the trip itself.
Oops.
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
Commit bb6b6b49a6d4 "Start merging dives by keeping the dive computer data
from both dives" created a compile time warning. This simply adds an #if /
Yes, this might accelearate bit rod in the code, but I just dislike the
warning message when compiling Subsurface.
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
Now that we have dive computer device ID fields etc, we can do a much
better job of merging the dive computer data.
The rule is
- if we actually merge two disjoint dives (ie extended surface interval
  causing the dive computer to think the dive ended and turning two of
  those dives into one), find the *matching* dive computer from the
  other dive to combine with.
- if we are merging dives at the same time, discard old-style data with
  no dive computer info (ie act like a re-download)
- if we have new-style dive computers with identifiers, take them all.
which seems to work fairly well.
There's more tweaking to be done, but I think this is getting to the
point where it largely works.
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
It's annoying to see water salinity data in the XML that isn't relevant,
and adding the default value just because the dive got downloaded from
libdivecomputer is definitely wrong.
We should set the water salinity explicitly only if we have it
explicitly set on the dive computer.
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
This simplifies the vendor/product fields into just a single "model"
string for the dive computer, since we can't really validly ever use it
any other way anyway.
Also, add 'deviceid' and 'diveid' fields: they are just 32-bit hex
values that are unique for that particular dive computer model. For
libdivecomputer, they are basically the first word of the SHA1 of the
data that libdivecomputer gives us.
(Trying to expose it in some other way is insane - different dive
computers use different models for the ID, so don't try to do some kind
of serial number or something like that)
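The idea, roughly (shown here with GLib's GChecksum purely as an
illustration - the real code may compute the SHA1 differently):

    #include <glib.h>
    #include <string.h>

    /* Illustration: use the first 32 bits of SHA1(data) as the id. */
    static guint32 data_to_id(const unsigned char *data, gsize len)
    {
        GChecksum *csum = g_checksum_new(G_CHECKSUM_SHA1);
        guint8 digest[20];
        gsize digest_len = sizeof(digest);
        guint32 id;

        g_checksum_update(csum, data, len);
        g_checksum_get_digest(csum, digest, &digest_len);
        g_checksum_free(csum);
        memcpy(&id, digest, sizeof(id));
        return id;
    }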
For the Uemis Zurich, which doesn't use the libdivecomputer import, we
currently only set the model name. The computer does have some kind of
device ID string, and we could/should just do the same "SHA1 over the
ID" to give it a unique ID, but the pseudo-xml parsing confuses me, so
I'll let Dirk fix that up.
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
Also, note that we do *not* do the "find_sample_offset()" any more when
we merge two dives that happen at the same time - since we just keep
both sets of dive computer data around.
But we keep the function to find the best offset around, because we may
well want to use it later when *showing* the dive, and trying to match
up the different sample data from the multiple dive computers associated
with the dive.
Because of that, this causes warnings about the now unused function.
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
Now that we have more complete dive computer information, we can use
that to match the dives we download, and stop with the hacky "Would we
merge this" check.
For XML files without the explicit dive computer information, go back to
checking the exact dive time.
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
This also knows how to save and restore multiple dive computers in the
XML data, but there's no way to actually *create* that kind of
information yet (nor do we display it). Tested by creating fake XML
files with multiple dive computers by hand so far.
The dive computer information right now contains (apart from the sample
and event data that we've always had):
- the vendor and product name of the dive computer
- the date of the dive according to the dive computer (so if you change
the dive date manually, the dive computer date stays around)
Note that if the dive computer date matches the dive date, we won't
bother saving the redundant information in the XML file.
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
When we have a preferred dive computer that overrides old information
when merging two dives, we just copy the dive computer data over.
However, we need to clear the source of the dive computer data so that
we don't then free the sample data when that old source of the newly
merged dive gets freed.
This fixes a memory scribble (and likely SIGSEGV) for the "prefer
downloaded" case.
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
If the dive computer does not record the sample interval, but records
time stamps on the samples, we use those.
This also fixes a couple of corner cases that were noticed in new log
samples, as well as the import of dives logged in imperial units.
Signed-off-by: Miika Turkia <miika.turkia@gmail.com>
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
For now we only have one fixed divecomputer associated with each dive,
so this doesn't really change any current semantics. But it will make
it easier for us to associate a dive with multiple dive computers.
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
We used to avoid some extra allocations by just allocating the dive
samples as part of the 'struct dive' allocation itself, but that ends up
complicating things, and will make it impossible to have multiple
different sets of samples (for multiple dive computers).
So stop doing it. Just allocate the dive samples array separately.
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
When this was first implemented the assumption was that a downloaded dive
that is to be merged with an existing dive would have the same time stamp.
But as Linus pointed out even back then, this does fail if a dive has been
merged with a download from a different dive computer before (think:
download from computer a, then download same dive from b, then improve
something in the parsing from computer a and try to redownload; the time
stamp could have changed).
This commit also fixes a silly omission in the merge_dives() function
(which ended up ALWAYS preferring the downloaded dive) and finally
implements the necessary changes to mark dives downloaded from a Uemis SDA
as well.
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
Prior to this commit, gtk often decided to collapse the trip with the
selected dive after the user imported or downloaded additional dives.
Since Subsurface tracks dives as being selected even after gtk collapses a
trip (which clears all selection state as far as gtk is concerned) this
could lead to the strange situation that the user could click on a new
dive to select it without unselecting the already selected dive - and
suddenly edit or delete did things that were entirely unwanted.
With this change we explicitly save and then restore the tree state around
import and download operations. This ensures that the same dive(s) stay
selected and trips stay expanded and therefore avoids the issues described
here.
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
The code pretended to support this for libdivecomputer based downloads,
but it had never been hooked up when the native Uemis downloader was
implemented. When I finally decided to close that feature gap I realized
that the original code was, shall we say, "aspirational" or "completely
bogus" and therefore never worked.
So instead of just hooking up the code for the Uemis downloader I instead
implemented this correctly for the first time for both libdivecomputer and
the native Uemis downloader.
In order not to have to mess with multithreaded Gtk development I simply
opted for a helper function that fires on a 100ms timeout and ends
the dialog without a response. This way we can run the dialog while
waiting for the download to finish, still update the progress bar and
respond in a useful manner to the user clicking cancel.
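Roughly the shape of that helper (a sketch; the names and the exact
cancel handling are illustrative, not the actual code):

    #include <gtk/gtk.h>

    /* Fires while the download dialog is running; ending the dialog
     * without a real response lets the caller update the progress bar,
     * check for a cancel click, and run the dialog again. */
    static gboolean download_timeout(gpointer data)
    {
        gtk_dialog_response(GTK_DIALOG(data), GTK_RESPONSE_NONE);
        return FALSE;   /* one-shot; the caller re-arms it each iteration */
    }

    /* caller, roughly:
     *   g_timeout_add(100, download_timeout, dialog);
     *   result = gtk_dialog_run(GTK_DIALOG(dialog));
     */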
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
- Create a grid for each dive printed.
- We change justify from "center" to "left", which helps differentiate each component of the array.
Signed-off-by: Salvador Cuñat <salvador.cunat@gmail.com>
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
The divecomputer download code will stop at a matching dive (unless
you check the "Download all dives" option when downloading).
However, the matching dive is an *exact* match, which works well when
you have a single dive computer, but is a big pain when you have
multiple. What happens is that the date of the dive will be determined
by whatever dive computer you used first, and then downloading from
other dive computers will not match exactly, but will merge (if the
computers are within a minute of each other).
And that will continue to happen every time you try to download from
that other dive computer.
So use the same logic as for the automatic dive merging: consider
"within one minute" to be a matching dive, and don't download dives
that would just be merged anyway - unless the user asks for it.
We do want to have some way of saying "force download of all dives
from today" or something like that, I suspect. Because while I don't
want to re-download *every* dive, I might want to force-merge the last
<N> dives.
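The matching test then boils down to something like this (a sketch with
a simplified timestamp type; the real code works on the dive structures):

    typedef long long timestamp_t;   /* seconds since the epoch, simplified */

    /* A downloaded dive counts as "already have it" if it starts within
     * one minute of an existing dive - the same window the automatic
     * dive merging uses. */
    static int dives_match(timestamp_t existing, timestamp_t downloaded)
    {
        timestamp_t delta = existing > downloaded ?
            existing - downloaded : downloaded - existing;
        return delta <= 60;
    }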
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
And once again buffer_insert contained a blatant bug.
The code wasn't copying the trailing '\0' when extending the string, which
usually didn't end up blowing up the code (and therefore kept the bug
hidden until now) because of the way realloc reused memory - we just had
trailing garbage strings. But sometimes we weren't so lucky and the strlen
in a subsequent call of buffer_insert would run past the end of the
allocated buffer.
Oops.
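A simplified picture of the corrected operation (error handling omitted;
this is a sketch, not the actual function):

    #include <stdlib.h>
    #include <string.h>

    /* Insert 'txt' into '*str' at offset 'pos'.  The crucial detail is
     * the "+ 1" that moves the terminating '\0' along with the tail. */
    static void buffer_insert_sketch(char **str, int pos, const char *txt)
    {
        size_t oldlen = strlen(*str);
        size_t addlen = strlen(txt);

        *str = realloc(*str, oldlen + addlen + 1);
        memmove(*str + pos + addlen, *str + pos, oldlen - pos + 1);
        memcpy(*str + pos, txt, addlen);
    }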
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
We are accessing offset 24 in an array of length 24. To make things easier
for the base64 conversion we just treat this as an off-by-three error and
instead create an array large enough for 27 elements and convert a
sufficient number of base64 chars to initialize all of them.
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
Simply call "make LIBDCDEVEL=1" and the libdivecomputer includefiles are
expected in ../libdivecomputer/include and the actual library is linked
from ../libdivecomputer/src/.libs
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
This may seem like an esoteric corner case, but it will actually happen
reliably when re-downloading dives from the Uemis SDA:
If the user selects "Force download of all dives" in the "Download from
divecomputer" dialog and if the SDA runs out of space and needs to be
unmounted and remounted, then for the 'Retry' the 'force' flag should be
cleared - or the user will once again start from the first dive, which
is almost certainly not what they expect.
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
If we ran out of space on the Uemis SDA during download and the user
unmounted, unplugged and replugged the SDA, we need to take care to
correctly reset the file number we use for finding the correct ANS file.
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
Start every step with much longer timeouts (until we get the first
response back), but then use shorter timeouts once we have started
receiving data.
This uses up fewer of the ANS files and allows us to get more dives
downloaded before the SDA has to be unplugged to reset communications,
yet at the same time it still improves the overall download time.
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
This mirrors commit 59929fdb5d2a "Mark divelist changed as we download
dives from a dive computer" which only fixed things for the
libdivecomputer case.
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
The GPS parsing and saving were using sscanf and sprintf respectively,
and since the coordinates are floating point values (boo!) that affects
both of them. In a C/US locale, we use a period for decimal values, while
most European locales use a comma.
We really should probably just fix things to use integer values (degrees
and nanodegrees?) but this is the simplest fix/workaround for the issue.
Probably nobody ever really noticed until I tested the Swedish locale
for grins, since we don't have a good way to actually set the GPS
coordinates yet. I've got a few dives with GPS information that I
entered manually.
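One locale-independent way to handle the coordinates (shown with GLib
helpers as an illustration; the actual workaround in the patch may
differ):

    #include <glib.h>
    #include <stdio.h>

    /* Parse and print a coordinate with '.' as the decimal separator,
     * regardless of the user's locale. */
    static void gps_coordinate_example(const char *text)
    {
        char buf[G_ASCII_DTOSTR_BUF_SIZE];
        double degrees = g_ascii_strtod(text, NULL);          /* always expects '.' */

        g_ascii_formatd(buf, sizeof(buf), "%.6f", degrees);   /* always emits '.' */
        printf("%s\n", buf);
    }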
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
Most of the dive computers I have access to don't do the whole surface
event thing at the beginning or the end of the dive, so when you merge
two consecutive dives, you get this odd merged dive where the diver
spent the time in between at a depth of 1.2m or so (whatever the dive
computer "I'm now under water" depth limit happens to be).
Don't do that. Add surface events at the end of the first dive to be
merged, and the beginning of the second one, so that the time in between
dives is properly marked as being at the surface.
The logic for "time in between dives" is a bit iffy - it's "more than 60
seconds with no samples". If somebody has dive computers with samples
more than 60 seconds apart, this will break and we may have to revisit
the logic. But dang, that's some seriously broken sample rate.
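The gap test itself is simple (a sketch with simplified types; the real
code works on the merged sample and event lists):

    /* Times in seconds.  A gap of more than 60 seconds between samples
     * is treated as surface time and gets bracketed with depth-0
     * surface entries. */
    struct sample_sketch { int time; int depth_mm; };

    static int is_surface_gap(const struct sample_sketch *a,
                              const struct sample_sketch *b)
    {
        return b->time - a->time > 60;
    }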
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
Commit 38c79d149d ("Simplify and clean up dive trip management")
simplified the code a bit *too* much, and removed the check for
"dive->selected".
As a result, trying to delete a dive resulted in *all* dives being
deleted.
Oops.
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
"record_dive()" won't do that, since otherwise we'd mark the dive list
changed when we load it from an XML file.
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
This allows zooming in with the scroll-wheel if you have one (or the
two-finger scrolling on a touchpad).
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
Prevent tiny temperature changes from being exaggerated in the plot.
Also, shift the pressure plot around a bit (if necessary) to prevent it
from ending in the same space as the temperature plot on the profile
graph.
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
The Uemis SDA allows the user to set it up for salt water and fresh water
use. We should take this into consideration for the water pressure to
depth conversion.
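For reference, this is the conversion the setting affects, as a sketch
(the densities are the usual approximations, not necessarily the exact
constants used in the code):

    /* depth = relative pressure / (water density * g) */
    static double pressure_to_depth_m(double bar_above_surface, double density_kg_per_l)
    {
        return bar_above_surface * 100000.0 / (density_kg_per_l * 1000.0 * 9.81);
    }

    /* 2.0 bar of water pressure:
     *   fresh water (1.00 kg/l): ~20.4 m
     *   salt water  (1.03 kg/l): ~19.8 m
     */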
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
In my excitement about extracting these from libdivecomputer I forgot to
actually store them and then parse them again. Oops.
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
We generate the average temperature statistics by adding up the
(converted to user units - not in millikelvin) temperatures and then
dividing by the number of dives we've added up over.
HOWEVER.
We did that summing of the temperatures into an integer variable, even
though the converted temperatures are floating point. So things got
rounded down to integers and the average temperature was just bogus
(although reasonably close).
We could do the summing of the temperatures in millikelvin and only do
the conversion to user units at the very end. But the smaller patch
is to just change the accumulator to a double value.
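The gist of the fix, as a sketch (the millikelvin-to-Celsius conversion
and the array interface are illustrative):

    /* Average of per-dive temperatures, in user units (degrees C here). */
    static double average_temperature_c(const int *mkelvin, int n)
    {
        double sum = 0.0;   /* was an int, which truncated each addend */
        int i, count = 0;

        for (i = 0; i < n; i++) {
            if (!mkelvin[i])
                continue;
            sum += (mkelvin[i] - 273150) / 1000.0;
            count++;
        }
        return count ? sum / count : 0.0;
    }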
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
The code in commit f99e1b476b18 "Trim the dive to exclude surface time at
beginning and end" failed rather badly if a dive has no samples at all -
which is true for many of our test dives.
This makes sure that we don't exclude data points if we never set up start
and end times.
Reported-by: Lubomir I. Ivanov <neolit123@gmail.com>
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
Commit 6c52e8a2e5 ("Add plotting of the deco ceiling") for some
totally unexplained reason deleted one "else" statement, resulting in
some plot events not having a time at all. Which causes various really
odd issues if you hit that situation, including divide-by-zero etc due
to the difference in times between events being nonsensical.
It's just some odd mistake that was entirely unrelated to the other
changes in that commit.
Add the missing line back in.
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
I think I prefer the 2.5x zoom over the pure doubling.
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>