We have two different models for setting the deviceid associated with a
dive computer: either take the value from the libdivecomputer 'devinfo'
field (from the DC_EVENT_DEVINFO event), or generate the device ID by
just hashing the serial number string.
The one thing we do *not* want is to use both methods, so that the
same device generates different device IDs, because then we'll
think we have two different dive computers even though they are one and
the same.
Usually, this is not an issue, because libdivecomputer either sends the
DEVINFO event or gives us the serial number string, and we'll always
just pick one or the other.
However, in the case of at least the Suunto EON Steel, I intentionally
did *not* send the DC_EVENT_DEVINFO event, because it gives no useful
information. We used the serial number string to generate a device ID,
and everything was fine.
However, in commit d40cdb4755ee ("Add the devinfo event") in the
libdivecomputer tree, Jeff started generating those DC_EVENT_DEVINFO
events for the EON Steel too, and suddenly subsurface would start using
a device ID based on that instead.
The situation is inherently ambiguous - for the EON Steel, we want to
use the hash of the serial number (because that is what we've
historically done), but other dive computers might want to use the
DEVINFO data (because that is what _those_ backends have historically
done, even if they might also implement the new serial string model).
This commit makes subsurface resolve this ambiguity by simply preferring
whatever previous device ID it has associated with that particular
serial number string. If you have no previous device IDs, it doesn't
matter which one you pick.
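A hedged sketch of that preference, purely illustrative (lookup_known_device_id() is a stand-in for whatever lookup the real code does against previously seen devices):

    /* Illustrative only: prefer a device ID we have already associated
     * with this serial number string over a freshly derived one. */
    static uint32_t resolve_device_id(const char *serial, uint32_t candidate_id)
    {
        uint32_t known = lookup_known_device_id(serial);  /* hypothetical helper */

        if (known)
            return known;        /* stick with the historical ID */
        return candidate_id;     /* no history: either choice is fine */
    }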
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
There might be some spurious setpoint changes at t=0 without
an actual value (I have no idea where those come from). In
any case, those do not indicate that the dive is a CCR dive.
Signed-off-by: Robert C. Helling <helling@atdotde.de>
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
There were two problems in the calculation of tissue pre-saturation
from previous dives: we added the surface interval after the dive
rather than before, and we also took dives into account that were in
the future of the dive being considered.
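One way the corrected ordering can look, as a heavily hedged sketch (add_surface_interval(), add_dive_loading() and last_end are illustrative stand-ins, not the real deco code):

    /* Illustrative only: walk the earlier dives in chronological order,
     * credit the surface interval *before* each dive, and skip any dive
     * that starts after the dive we are computing saturation for. */
    timestamp_t last_end = 0;
    for (int i = 0; i < dive_table.nr; i++) {
        struct dive *prev = dive_table.dives[i];
        if (prev->when >= dive->when)
            continue;                                /* in the future of this dive */
        add_surface_interval(prev->when - last_end);
        add_dive_loading(prev);
        last_end = dive_endtime(prev);
    }
    add_surface_interval(dive->when - last_end);     /* interval up to this dive */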
Reported-by: Timothy Massey
Signed-off-by: Robert C. Helling <helling@atdotde.de>
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
We could create plot_info data that didn't contain all the time slots
for the events from the dive computer, which would terminally confuse
the plotting of the event profile widgets because it couldn't match up
the event with the dive plot data model.
So for example, in DiveEventItem::recalculatePos(), when the code tries
to figure out the spot in the data model, it could fail, and then try to
hide the event (because without the data model information it doesn't
know where it should go). But that hiding would then not match the
logic in DiveEventItem::shouldBeHidden(), and the event would end up
being shown in the upper left-hand corner of the profile after all.
The reason the plot_info data wouldn't contain the time slots is that
the slots are allocated primarily for the sample data, and then the
events would be added in between sample data in populate_plot_entries().
But since we'd only add the event pointer *between* samples, that would
mean that events after the last sample would not get plot-info points
allocated to them.
That issue was exacerbated by how we also truncate uninteresting samples
at the end when some dive computers end up giving a long stream
(possibly several minutes) of "at the surface" events before they
finally turn off logging.
This makes sure that we take the event timestamps into account for the
"maxtime" calculation, and also that we finish populating the plot_info
data with any final event timestamps.
Now all the events will have a matching plot_info entry.
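A hedged sketch of the two-part fix (the field names roughly follow the dive computer structures; the helper at the end is hypothetical):

    /* Illustrative only: let events extend the plot range ... */
    int maxtime = last_sample_time;
    struct event *ev;

    for (ev = dc->events; ev; ev = ev->next)
        if (ev->time.seconds > maxtime)
            maxtime = ev->time.seconds;

    /* ... and give events past the final sample their own plot_info
     * entries, so the profile widgets can still find them. */
    for (ev = dc->events; ev; ev = ev->next)
        if (ev->time.seconds > last_sample_time)
            add_plot_entry_at(ev->time.seconds);     /* hypothetical helper */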
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
I missed the fact that not only did we skip importing surface events
from the dive computer, we had also made our xml parser ignore them when
loading an xml file. All part of our historical "let's ignore surface
events because dive computers are being very annoying about it".
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
There are cases where we actually want to keep them, as exemplified by
this situation from Richard Yorke:
"I have just come across a situation when ignoring the surface marker
is a disadvantage. I have just had a problem with my BC feed
seeping, slowly filling my BC and as I control my buoyancy on the
bottom using the air in my drysuit, I did not notice, so that when I
came to ascend the expanding air in my BC caused a loss of control.
Fortunately not from a great depth and no untoward consequences.
However, the Subsurface profile only shows me rising to 4m and
descending to 5.5m for my safety stop. However I actually broke the
surface and descented to 5.5 but the frequency of recording depth was
not fast enough to show this as it was so brief"
so remove the code that ignores the surface events entirely.
I think we'll have to come up with some smarter filtering model for
showing them, but that is predicated on getting these events to come
through in the first place.
Reported-by: Richard Yorke <yorke.richard@gmail.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
Our primary dive computer really is special, not just because it's the
first one: it's directly embedded in the "struct dive", and so if you
just walk the divecomputer list, you'll miss it, because it's not _on_
the list, it is the very head _of_ the list.
We had that bug in copy_dive(), and it turns out we have it in
clear_dive() too: clear_dive() would free all the dive computers on the
list, but not the actual primary one.
This is a minor memory leak, no more, so it's not exactly critical, but
let's just do it right.
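A hedged sketch of the distinction (free_dc_contents() is illustrative; the real cleanup frees more than this):

    /* Illustrative only: the first divecomputer is embedded in the dive,
     * so we free its contents but not the struct itself. */
    static void free_all_dcs(struct dive *dive)
    {
        struct divecomputer *dc = dive->dc.next;

        free_dc_contents(&dive->dc);    /* embedded head: contents only */
        while (dc) {
            struct divecomputer *next = dc->next;
            free_dc_contents(dc);
            free(dc);                   /* these were allocated separately */
            dc = next;
        }
    }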
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
A few basic rules for gas validation:
We can't have < 0% or > 100% of either O2 or He
O2 + He must not be > 100%
Switch depth can't be < 0
This places limits on user-input values
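A minimal sketch of those checks, assuming the usual permille / mm representation for gas fractions and depths (names are illustrative):

    /* Illustrative only: reject out-of-range user input */
    static bool gas_input_valid(int o2_permille, int he_permille, int switch_depth_mm)
    {
        if (o2_permille < 0 || o2_permille > 1000)
            return false;
        if (he_permille < 0 || he_permille > 1000)
            return false;
        if (o2_permille + he_permille > 1000)
            return false;               /* O2 + He together exceed 100% */
        if (switch_depth_mm < 0)
            return false;               /* switch depth can't be negative */
        return true;
    }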
Signed-off-by: Rick Walsh <rickmwalsh@gmail.com>
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
For starters, let's just state that this dive was downloaded from
Shearwater. However, once we have information on how model numbers map to
names, we can use that info for the models we know about.
Signed-off-by: Miika Turkia <miika.turkia@gmail.com>
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
The previous patch (Planner: add best mix EAD preference) used the term EAD
(equivalent air depth) in variable names and strings, when it should have been
END (equivalent narcotic depth).
They're not the same thing and shouldn't be confused.
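For reference, a hedged sketch of the two quantities in metric units, assuming O2 is counted as narcotic for END (one common convention; the planner's exact formula may differ):

    /* Illustrative only: depth in meters, gas fractions in [0, 1] */
    double ead_m(double depth, double fn2)
    {
        return (depth + 10.0) * fn2 / 0.79 - 10.0;    /* equivalent air depth */
    }

    double end_m(double depth, double fhe)
    {
        return (depth + 10.0) * (1.0 - fhe) - 10.0;   /* equivalent narcotic depth */
    }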
Signed-off-by: Rick Walsh <rickmwalsh@gmail.com>
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
Add best mix EAD preference and UI, along with a tooltip describing what it
does
Signed-off-by: Rick Walsh <rickmwalsh@gmail.com>
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
Best mix O2 calculated based on planner Bottom O2 preference
Best mix He calculated based on EAD of 30m (should be made user-configurable)
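A hedged sketch of how such a best mix can be derived, assuming a maximum ppO2 preference and O2 counted as narcotic (the planner code may differ in details):

    /* Illustrative only: depth in meters, fractions in [0, 1] */
    double best_o2(double depth, double ppo2_max)
    {
        return ppo2_max / ((depth + 10.0) / 10.0);    /* ppO2 / ambient pressure */
    }

    double best_he(double depth, double target_end)   /* e.g. target_end = 30 */
    {
        /* keep the narcotic part of the mix equivalent to target_end */
        double narcotic = (target_end + 10.0) / (depth + 10.0);
        return narcotic >= 1.0 ? 0.0 : 1.0 - narcotic;
    }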
Signed-off-by: Rick Walsh <rickmwalsh@gmail.com>
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
This adds autocompleting text input fields for suit, buddy and
divemaster.
[Dirk Hohndel: some whitespace cleanup]
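A hedged illustration of the general Qt pattern (the actual UI wires this up differently, possibly via QML; buddyEdit and the string source are placeholders):

    // Illustrative only: complete a QLineEdit from strings we already know
    QStringList buddies = existingBuddies();           // hypothetical source
    QCompleter *completer = new QCompleter(buddies, buddyEdit);
    completer->setCaseSensitivity(Qt::CaseInsensitive);
    buddyEdit->setCompleter(completer);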
Signed-off-by: Joakim Bygdell <j.bygdell@gmail.com>
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
This generates the QStringLists needed to populate the comboboxes
in DiveDetailsEdit.
Signed-off-by: Joakim Bygdell <j.bygdell@gmail.com>
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
This will parse the date and time information on CSV import if the file
name matches the one used by APD log viewer (date and time are available
in the file name). Hard coding the year to 20?? is a bit unfortunate,
but as there are only two digits in the year, we have to invent something.
And it would be quite optimistic to assume this will bite us back any
time soon :D
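A hedged sketch of just the year handling (the file name layout itself is not shown; only the "prepend 20 to the two-digit year" part reflects what is described above):

    // Illustrative only: hard-code the century for a two-digit year
    QString twoDigitYear = yearFromFileName;   // hypothetical, e.g. "16"
    QString fullYear = "20" + twoDigitYear;    // becomes "2016"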
Signed-off-by: Miika Turkia <miika.turkia@gmail.com>
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
Printed command line can be used to manually test the import function,
allowing faster testing of XSLT changes, and showing debug prints that
are discarded by Subsurface.
Signed-off-by: Miika Turkia <miika.turkia@gmail.com>
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
No point in having it defined in each main program's .cpp. Especially
since the unit tests don't define it.
Signed-off-by: Thiago Macieira <thiago@macieira.org>
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
Otherwise we keep downloading the same image multiple times instead
of new images.
Signed-off-by: Robert C. Helling <helling@atdotde.de>
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
Not sure why we claimed that this was successful when clearly it wasn't.
There's a risk that this could break something on the desktop, but it
makes no sense to me why that would be the right thing to do.
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
This is a feature that many people have asked for. This implementation is
somewhat simplistic because we simply use a different name for the
program settings - but interestingly enough this appears to be enough to
capture a lot of the core functionality that people are looking for in
multi-user support.
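A hedged sketch of the mechanism (userName is a placeholder; the point is only that a different settings name yields an independent set of program settings):

    // Illustrative only: per-user settings via a distinct application name
    QCoreApplication::setOrganizationName("Subsurface");
    QCoreApplication::setApplicationName(QString("Subsurface-%1").arg(userName));
    QSettings settings;    // now reads and writes this user's own settings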
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
Otherwise we step on our own feet when downloading several images,
like after import from divelogs.de with many linked images.
Signed-off-by: Robert C. Helling <helling@atdotde.de>
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
This seems to work around the crazy QDateTime::fromTime_t() problem in Qt.
It is *very* lightly tested. In fact, the only test is that "test0.xml"
change that is part of this patch.
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
It turns out that we are starting to have users that have logs that go
back that far. It won't be common, but let's get it right anyway.
NOTE! With us now supporting dates earlier in the 1900s, this also makes
"utc_mktime()" always add the "1900" to the year field. That way we
avoid ever using the fairly ambiguous two-digit shorthand.
It didn't use to be all that ambiguous when we knew that any two-digit
number less than 70 had to be 2000+. Now that we support going back to
earlier in the last century, that certainty is eroding.
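A hedged sketch of the convention (simplified; the real utc_mktime() of course handles the whole struct tm):

    /* Illustrative only: the year field is "years since 1900", so we
     * always add 1900 instead of guessing the century. */
    int year = tm->tm_year + 1900;

    /* The two-digit heuristic this avoids looked roughly like
     *   if (year < 70) year += 2000; else if (year < 100) year += 1900;
     * which stops being safe once early-1900s dates are allowed. */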
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
So far we didn't do that at all, we either relied on the user manually
creating a local repo, or we cloned a remote repo.
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
If we have incorrect cloud credentials, we need to return an error from
the git authentication callback in order to avoid an endless
authentication loop. This might well happen e.g. when changing the
password on the desktop while Subsurface on the laptop still thinks the
credentials are valid and ends up in the authentication loop.
The authentication callback in libgit2 is intended to be used to ask
for user credentials, and as we handle credentials elsewhere, we just
need to fail the authentication attempts. (The threshold for bailing
out could have been 1 attempt...)
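A hedged sketch of what such a callback can look like with the libgit2 API of that era (the attempt threshold and the stored_user/stored_passwd variables are illustrative):

    /* Illustrative only: stop answering after a few attempts instead of
     * letting libgit2 call us back forever with bad credentials. */
    static int attempts;

    static int credential_cb(git_cred **out, const char *url,
                             const char *username_from_url,
                             unsigned int allowed_types, void *payload)
    {
        if (attempts++ > 2)
            return GIT_EUSER;    /* abort: breaks the endless auth loop */
        return git_cred_userpass_plaintext_new(out, stored_user, stored_passwd);
    }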
Signed-off-by: Miika Turkia <miika.turkia@gmail.com>
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
This allows us to parse the DL7 profile data (skipping the header and
footer)
Signed-off-by: Miika Turkia <miika.turkia@gmail.com>
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
Make them use indices into the plot-info, fix calculation of average
depth, and fix and add comments.
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
Qt 5.6.0 is broken when it comes to using CoreLocationService on iOS.
It doesn't even check if the location service is enabled. My patches fix
that and make Qt set an error code right after service creation. Having
the service creation fail is actually the wrong thing to do because then
Qt switches over to GeoClue and that really isn't helpful for our needs
here.
Additionally, Qt 5.6.0 without my patches doesn't follow the REQUIRED
flow of using the location service as it does not check the access
permissions before accessing the GPS service - without doing so the
GPS service will not run in the background.
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
If the GPS source returns an error that could be an indication that the
user hasn't given us permission to use it, so switch our status to NOGPS.
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
Initially we don't know if we have a source. After that we may think
that we have one, or not have one (but that can actually change while
the program is running if the user, for example, turns the source off
or switches to airplane mode).
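A hedged sketch of the resulting state tracking (the enumerator names other than NOGPS are illustrative):

    // Illustrative only: what we currently believe about the GPS source
    enum gpsStatus { SOURCE_UNKNOWN, NOGPS, HAVE_SOURCE };
    gpsStatus status = SOURCE_UNKNOWN;    // updated as errors / positions arrive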
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
I couldn't figure out how to break this down into small, useful commits.
Part of the problem is that I kept going while working on this, and as
you can see from looking at the commit, diff tries so hard to find small
code fragments that moved around that the diff overall becomes quite
unreadable; it seemed impossible to recreate the sequence of steps after
the fact.
It all started with adding the parsing for the GPS coordinates. But while
testing that code I found several issues with the rest of the function.
Most importantly it seemed ridiculous that we carefully tried to match the
texts that the DiveObjectHelper would create for the various fields,
instead of just using the DiveObjectHelper to do just that. And once I had
converted that I once again realized just how long and hard to understand
that function was getting and decided to break out some of the more
complex parts into their own helper functions.
But of course all this didn't happen in this logical, structured, ordered
way. Instead I did all of these things at the same time, testing,
rearranging, etc.
So in the end I went with one BIG commit that does all of this in one fell
swoop.
This adds four helper functions to deal with start time/date, duration,
location and gps coordinates, and depth of the dive.
To avoid mistakes when dealing with the GPS coordinates, there's another
helper to encapsulate the creation of the dive site, and we switched to
using the current GPS location.
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>