We no longer need the remove infrastructure, and the edit nickname function
becomes much more intuitive to use by passing in the dive computer for
which we want to create a nickname instead of the internal index into
the array of devices.
This also removes / simplifies the device list update signals in the
DiveListNotifier.
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
This makes it much easier to manipulate dc nickname entries. In order
for that to work we can't simply remove entries with an empty nickname
(but that isn't needed anyway, as the code that saves to XML or git
already handles that case correctly).
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
... it just causes problems later when we free them, since we don't do
any reference counting.
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
When we save the divecomputer data, we never actually save the serial
value as a field. We used to rely on saving the very dodgy 'deviceid',
and then look up the serial number from there. And that never really
worked reliably, but we didn't really notice, because we never really
_used_ the serial number anywhere.
The only place the serial number is actually reliably displayed is in
the "Extra data" tab, which contains the key value pairs, and that's
where the original dive download code got the serial number from.
So just parse that at load time too, the same way we parsed it at dive
download time.
In fact, do the firmware version the same way, and remove the code from
the downloader, since it too can rely on 'add_extra_data()' just picking
up the information directly.
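A minimal sketch of that idea (the real add_extra_data() lives in the core
and its exact signature and field names may differ): the key/value hook can
populate serial and firmware whenever it sees the matching keys, no matter
whether the pair comes from a download or from loading a file.
```
#include <strings.h>
#include <string>

// Minimal stand-in for the real divecomputer struct; field names are assumptions.
struct divecomputer_sketch {
        std::string serial;
        std::string fw_version;
};

// Whenever an extra-data key/value pair is recorded -- at download or at load
// time -- also pick up the serial number and firmware version from it.
static void add_extra_data_sketch(divecomputer_sketch &dc, const char *key, const char *value)
{
        if (!strcasecmp(key, "Serial"))
                dc.serial = value;
        else if (!strcasecmp(key, "FW Version"))
                dc.fw_version = value;
        // ...then append the pair to the extra-data list as before.
}
```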
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
We have this odd legacy notion of a divecomputer 'device', that was
originally just basically the libdivecomputer 'EVENT_DEVINFO' report
that was associated with each dive. So it had firmware version,
deviceid, and serial number.
It had also gotten extended to do 'nickname' handling, and it was all
confusing, ugly and bad. It was particularly bad because it wasn't
actually a 'per device' thing at all: due to the firmware field, a dive
computer that got a firmware update forced a new 'device'.
To make matters worse, the 'deviceid' was also almost random, because
we've calculated it a couple of different ways, and libdivecomputer
itself has changed how the legacy 32-bit 'serial number' is expressed.
Finally, because of all these issues, we didn't even try to make the
thing unique, so it really ended up being a random snapshot of the state
of the dive computer at the time of a dive, and sometimes we'd pick one,
and sometimes another, since they weren't really well-defined.
So get rid of all this confusion.
The new rules:
- the actual random dive computer state at the time of a dive is kept
in the dive data. So if you want to know the firmware version, it
should be in the 'extra data'
- the only serial number that matters is the string one in the extra
data, because that's the one that actually matches what the dive
computer reports, and isn't some random 32-bit integer with ambiguous
formatting.
- the 'device id' - the thing we match with (together with the model
name, eg "Suunto EON Steel") is purely a hash of the real serial
number.
The device ID that libdivecomputer reports in EVENT_DEVINFO is
ignored, as is the device ID we've saved in the XML or git files. If
we have a serial number, the device ID will be uniquely associated
with that serial number, and if we don't have one, the device ID will
be zero (for 'match anything').
So now 'deviceid' is literally just a shorthand for the serial number
string, and the two are joined at the hip.
- the 'device' management is _only_ used to track devices that have
serial numbers _and_ nicknames. So no more different device
structures just because one had a nickname and the other didn't etc.
Without a serial number, the device is 'anonymous' and fundamentally
cannot be distinguished from other devices of the same model, so a
nickname is meaningless. And without a nickname, there is no point in
creating a device data structure, since all the data is in the dive
itself and the device structure wouldn't add any value.
These rules mean that we no longer have ambiguous 'device' structures,
and we can never have duplicates that can confuse us.
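A minimal sketch of the "device id is just a hash of the serial string" rule
(FNV-1a is used here purely for illustration; the hash Subsurface actually
uses may differ):
```
#include <cstdint>
#include <string>

// No serial number -> device id 0 ("match anything"); otherwise a hash of
// the serial string, so id and serial are joined at the hip.
static uint32_t device_id_from_serial(const std::string &serial)
{
        if (serial.empty())
                return 0;
        uint32_t h = 2166136261u;      // FNV-1a offset basis
        for (unsigned char c : serial) {
                h ^= c;
                h *= 16777619u;        // FNV-1a prime
        }
        return h;
}
```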
This does mean that you can't give a nickname to a device that cannot be
uniquely identified with a serial number, but those are happily fairly
rare (and mostly older ones). Dirk said he'd look at what it takes to
give more dive computers proper serial numbers, and I already did it for
the Garmin Descent family yesterday.
(Honesty in advertising: right now you can't add a nickname to a dive
computer that doesn't already have one, because such a dive computer
will not have a device structure. But that's a UI issue, and I'll sort
that out separately)
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
If we download a first dive computer and add a dive site to the dive (by
setting a location name for example), and then download from another
dive computer that provides us with GPS data, we should keep the
existing dive site information, but add the GPS data from the freshly
downloaded dive computer.
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
This adds a cleanup function to be called after a divelogs.de upload
finishes (successful or not) to make sure the temporary zip file is
closed and removed.
Signed-off-by: Richard Fuchs <dfx@dfx.at>
On multi-user systems with a shared directory for temporary files, using
a static file name can lead to permissions problems and subsequent
errors due to collisions. Use a random unique file name for each
generated file to avoid these problems.
Note: the temporary file generated from the divelogs.de upload is still
left behind after the upload finishes.
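A sketch of the idea with plain mkstemp(); the actual fix sits in Qt-based
code and may well use QTemporaryFile instead, and the directory and prefix
shown here are illustrative.
```
#include <stdlib.h>
#include <unistd.h>
#include <string>

// Ask the OS for a unique, user-owned file instead of a fixed shared name.
static std::string make_unique_upload_name()
{
        const char *tmpdir = getenv("TMPDIR");
        std::string templ = std::string(tmpdir ? tmpdir : "/tmp") + "/subsurface-upload-XXXXXX";
        int fd = mkstemp(&templ[0]);   // fills in XXXXXX, creates the file with mode 0600
        if (fd < 0)
                return std::string(); // caller reports the error
        close(fd);                     // the zip writer reopens it by name
        return templ;
}
```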
Signed-off-by: Richard Fuchs <dfx@dfx.at>
The intent of the code was to check that there is a string and it has at least
two characters. Since iter is the result of a strchr(iter, '|') call, we
know that if iter isn't NULL, iter[0] is '|', so we only need to check the next
character.
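In other words, as a sketch of the pattern (not the exact call site):
```
#include <cstring>

// After iter = strchr(iter, '|'), a non-NULL iter is guaranteed to point at
// '|', so only the character after it needs checking.
static bool has_payload_after_separator(const char *s)
{
        const char *iter = strchr(s, '|');
        return iter && iter[1] != '\0';
}
```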
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
try_to_xslt_open_csv() re-allocates the memory passed in (not really great as
far as design goes, maybe something that should be reimplemented). Doing
pointer arithmetic with the returned base pointer results in garbage, unless
one gets super lucky and the realloc manages to not move the memory.
It's a wonder this ever worked.
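The pitfall, as a generic sketch (not the actual Subsurface code): keep an
index rather than a pointer across any call that may realloc() the buffer,
and recompute addresses from the returned base.
```
#include <cstdlib>

static char *maybe_grow(char *buf, size_t *len)
{
        *len *= 2;
        return (char *)realloc(buf, *len);     // may move the allocation
}

static void use_safely(void)
{
        size_t len = 64;
        char *buf = (char *)malloc(len);
        if (!buf)
                return;
        size_t pos = 10;                       // remember an offset, not buf + 10
        char *grown = maybe_grow(buf, &len);   // base pointer may change here
        if (grown) {
                buf = grown;
                buf[pos] = 'x';                // address recomputed from the new base
        }
        free(buf);
}
```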
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
Before making the cylinder-table dynamic, dives always
had at least one cylinder. When such a dive is displayed,
the TabDiveInformation class calls per_cylinder_mean_depth().
If there are no samples, this function generates a "fake
profile" with fake_dc(). Thus, effectively dives always
had samples once the user was displaying them.
When the cylinder-table was made dynamic, dives without
cylinders were supported. This can notably happen when
importing from CSV (this could actually be a bug).
per_cylinder_mean_depth() exits early in that case and
doesn't create a fake profile. This led to crashes
of the profile-widget, which were fixed in 6b2e56e513.
Non-sample dives were now shown with the Subsurface logo.
To restore the previous behavior, generate a fake profile
for sample-less dives in fixup_dive(), which is called
any time a dive is loaded or imported. This seems to
have been the intention anyway; previously it only worked
"by chance". This will make a few fake_dc() calls obsolete,
but so be it.
Since fake profiles are now generated on loading,
the parse-tests need to be fixed to account for that.
Signed-off-by: Berthold Stoeger <bstoeger@mail.tuwien.ac.at>
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
When merging two dives, the higher CNS value was taken. This could
result in inconsistent CNS values if two dives were merged where
one dive's CNS was calculated from a "fake profile", i.e. a dive
without dive-computer profile. In that case, the most conservative
value (all time spent at the bottom) was assumed. The merged dive
then consisted of the dive-computer profile and the conservative
CNS estimate.
This is fixed by setting the CNS value to "0" after merging,
which means "unknown". The correct value will then be recalculated
in "fixup_dive" from the actual sample data.
Signed-off-by: Berthold Stoeger <bstoeger@mail.tuwien.ac.at>
The data of the membuffer is passed as a data/length pair
to xmlReadMemory(). There is no point in NUL-terminating it.
Moreover, pass the data directly to xmlReadMemory()
instead of via variables. These variables are reused
later with a different meaning, making this super-confusing.
The membuf variable is turned from "const char *" to "char *"
to signal that we own the buffer.
Amazingly, zip_source_buffer() frees the buffer, even though
a "const void *" is passed in. This API is pure madness. Add
a comment.
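Both points in a compact sketch (real headers, but the actual upload code
differs in detail):
```
#include <libxml/parser.h>
#include <zip.h>

static void membuffer_consumers(char *buf, size_t len, zip_t *zip)
{
        // xmlReadMemory() takes a data/length pair, so no NUL terminator is needed.
        xmlDocPtr doc = xmlReadMemory(buf, (int)len, "membuffer.xml", NULL, 0);
        if (doc)
                xmlFreeDoc(doc);

        // zip_source_buffer() with freep != 0 takes ownership and frees 'buf'
        // when the source/archive is disposed of -- despite the const pointer.
        zip_source_t *src = zip_source_buffer(zip, buf, len, 1 /* freep */);
        (void)src;
}
```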
Signed-off-by: Berthold Stoeger <bstoeger@mail.tuwien.ac.at>
Thus, the membuffer data is automatically freed when going
out of scope - one thing less to worry about.
This fixes one use-after-free bug in uploadDiveLogsDE.cpp
and one extremely questionable practice in divetooltipitem.cpp:
The membuffer was a shared instance across all instances
of the DiveToolTipItem.
Removes unnecessary #include directives in files that didn't
even use membuffer.
Signed-off-by: Berthold Stoeger <bstoeger@mail.tuwien.ac.at>
C-style memory management is a pain and nearly nobody seems to get
it right. Add a C++ version of membuffer that frees the buffer
when it goes out of scope. Originally, I was thinking about
conditionally adding a constructor/destructor pair when compiling
with C++. But then I decided to create a derived class membufferpp,
because it would be extremely confusing to have behavioral change
when changing a source from C to C++ or vice versa.
Also add a comment about the dangers of returned pointers: they
become dangling on changes to the membuffer.
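A minimal sketch of the shape of this (the real struct membuffer has its own
fields and helpers; the names here are illustrative):
```
#include <cstdlib>

struct membuffer_sketch {              // stand-in for the C struct membuffer
        size_t len = 0, alloc = 0;
        char *buffer = nullptr;
};

struct membufferpp_sketch : membuffer_sketch {   // C++-only RAII wrapper
        membufferpp_sketch() = default;
        ~membufferpp_sketch() { free(buffer); }
        membufferpp_sketch(const membufferpp_sketch &) = delete;            // avoid double-free
        membufferpp_sketch &operator=(const membufferpp_sketch &) = delete;
};
```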
Signed-off-by: Berthold Stoeger <bstoeger@mail.tuwien.ac.at>
The sensor-id in the sample struct was a uint8_t, with all
the known problems of unsigned integers. In the rest of the
code cylinder ids are signed integers. To avoid confusion,
make it a signed int. int8_t should be enough (max. 127
cylinders). To allow for degenerate cases, use an int16_t.
32k cylinders should be enough for everyone.
Signed-off-by: Berthold Stoeger <bstoeger@mail.tuwien.ac.at>
The code will happily perform out-of-bound accesses if
pressure-sensors refer to non-existent cylinders. Therefore,
sanitize these values in fixup_dive(), which is called
every time a dive is loaded or imported.
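A sketch of the sanitization step (illustrative names; the real code walks
the samples of each divecomputer in the dive):
```
#include <vector>

struct sample_sketch { int sensor; };  // index of the cylinder the pressure came from

static void sanitize_sample_sensors(std::vector<sample_sketch> &samples, int num_cylinders)
{
        for (auto &s : samples)
                if (s.sensor < 0 || s.sensor >= num_cylinders)
                        s.sensor = -1; // illustrative "no sensor" marker
}
```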
Signed-off-by: Berthold Stoeger <bstoeger@mail.tuwien.ac.at>
By default, the parser would create samples with cylinder
ids 0 and 1. This creates out-of-bound accesses for the
common one-cylinder (or even no-cylinder) dives. These
were harmless when the cylinder-table was of a fixed size.
Since changing to a dynamic cylinder-table, these became
actual out-of-bound accesses. Don't create such samples
in the parser.
Signed-off-by: Berthold Stoeger <bstoeger@mail.tuwien.ac.at>
The sensor member of sample refers to a cylinder from which
the pressure was read. However, some dives don't even have
a cylinder. Therefore, introduce a special NO_SENSOR value
for these dives. Since the cylinder is given as a uint8_t,
0xff seems to be a sensible choice.
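Sketch of the sentinel and a typical check (the exact definition in the core
headers may differ):
```
#include <cstdint>

#define NO_SENSOR 0xff                 // impossible cylinder index in a uint8_t

static bool sample_has_pressure_sensor(uint8_t sensor)
{
        return sensor != NO_SENSOR;
}
```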
Signed-off-by: Berthold Stoeger <bstoeger@mail.tuwien.ac.at>
When the user has selected to show unused cylinders in the equipment tab,
respect this setting when exporting to divelogs.de.
Fixes #3277
Signed-off-by: Miika Turkia <miika.turkia@gmail.com>
This adds the capability to select 'Dive number' and 'Date / Time'
in the 'Copy dive components' dialog, and then copy them into the
clipboard.
When using 'Paste dive components', these values will then be pasted into
the selected dive(s).
This is intended to help with workflows that import dive information
from two different sources, like general information from another
logging program, and CCR ppO2 sensor readings from a unit log, and then
stitch them together into one cohesive entry with all data per dive.
Copied data is also output into formatted text when pasting the
clipboard outside of the application:
```
Dive number: 401
Date / time: Sun 2 May 2021 12:00 AM
```
No translations have been added as of now - I could not find any
information on how strings are translated for this project.
Signed-off-by: Michael Keller <github@ike.ch>
The color was misnamed, since it has only been used for the
duration line for quite some time (since 893bea700c to be
exact).
Signed-off-by: Berthold Stoeger <bstoeger@mail.tuwien.ac.at>
the last manually entered waypoint, but consider the
possibility that it should first stop where we are
before the next stop depth has cleared.
Reported-by: David Carron
Signed-off-by: Robert C. Helling <helling@atdotde.de>
It looks like libxml2 has some internal limitations by default that
cause parse failures in some situations. Avoid them with
XML_PARSE_HUGE.
Without this, you get errors like
test.xml:349250: parser error : internal error: Huge input lookup
όμουν τουλάχιστον αλλά +2kg και ενδεχομένως +4
^
when something in the xml file grows too large.
I don't know libxml2 internals, so I have no idea what exactly goes
wrong, but the docs say:
XML_PARSE_HUGE = 524288 : relax any hardcoded limit from the parser
and that makes us successfully parse the Greek file from Kostas.
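For reference, the option just gets OR-ed into the parser flags, e.g.
(sketch; Subsurface's call site uses xmlReadMemory(), where the option works
the same way):
```
#include <libxml/parser.h>

static xmlDocPtr parse_large_divelog(const char *filename)
{
        // XML_PARSE_HUGE relaxes libxml2's hardcoded node/text-size limits.
        return xmlReadFile(filename, NULL, XML_PARSE_HUGE);
}
```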
Reported-by: Kostas Katsioulis <kostaskatsioulis@gmail.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Move the ARRAY_SIZE macro into a header file and use it to determine the
number of cloud servers that we need to check.
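The macro is the classic one (the server hostnames below are placeholders,
not the real ones):
```
#define ARRAY_SIZE(a) (sizeof(a) / sizeof((a)[0]))

static const char *cloud_servers[] = {
        "cloud-eu.example.org",        // placeholder names
        "cloud-us.example.org",
};

static int count_cloud_servers(void)
{
        return (int)ARRAY_SIZE(cloud_servers);
}
```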
Suggested-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
If we can't reach the cloud server in the URL (which might come from the
settings or be passed in by the user), we try the alternative server(s).
If we end up changing servers, we need to update the remote that we have
already parsed from the URL.
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
If we can't reach our preferred server, try using a different one.
The diff makes more sense when ignoring white space.
With this we check the connection to the cloud server much earlier and
in case of failure to connect try a different cloud_base_url.
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
With the new names for the cloud server we'd get different local cache
directory names depending on which server gets used. In order to avoid
that, normalize the name before generating the hash that determines the
local directory name.
Additionally, the old code had an extra '/' in the URL, due to the way
the URL was assembled. Again, to match the existing hash for people
upgrading from older Subsurface versions, add that to our normalized
name as well.
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
The backend infrastructure will soon be able to support more than one
cloud server which automagically stay in sync with each other.
One critical requirement for that to work is that once a session is
started with one of the servers, the complete session happens with that
server - we must not switch from server to server while doing a git
transaction. To make sure that's the case, we aren't trying to use DNS
tricks to make this load balancing scheme work, but instead try to
determine at program start which server is the best one to use.
Right now this is super simplistic. Two servers, one in the US, one in
Europe. By default we use the European server (most of our users appear
to be in Europe), but if we can figure out that the client is actually
in the Americas, use the US server. We might improve that heuristic over
time, but as a first attempt it seems not entirely bogus.
The way this is implemented is a simple combination of two free
webservices that together appear to give us a very reliable estimate of
which continent the user is located on.
api.ipify.org gives us our external IP address
ip-api.com gives us the continent that IP address is on
If any of this fails or takes too long to respond, we simply ignore it
since either server will work. One oddity is that if we decide to change
servers we only change the settings that are stored on disk, not the
runtime preferences. This goes back to the comment above that we have to
avoid changing servers in mid sync.
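Schematically, the heuristic looks like this (pure sketch: http_get() is a
hypothetical blocking helper with a short timeout, the real code uses Qt
networking, and the exact ip-api.com query format shown is an assumption):
```
#include <string>

std::string http_get(const std::string &url);   // hypothetical helper, not a real API

static std::string pick_cloud_server(const std::string &eu_server, const std::string &us_server)
{
        std::string ip = http_get("https://api.ipify.org");          // our external IP
        if (ip.empty())
                return eu_server;                                     // any failure: keep the default
        std::string cont = http_get("http://ip-api.com/line/" + ip + "?fields=continentCode");
        if (cont == "NA" || cont == "SA")
                return us_server;                                     // the Americas: US server
        return eu_server;                                             // everyone else: EU server
}
```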
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
We know the preference is never empty, so stop testing for this. But
don't maintain two different preferences with basically the same
content. Instead add the '/git' suffix where needed and keep this all in
one place.
Simplify the extraction of the branch name from the cloud URL.
Also a typo fix and a new comment.
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
This was a hack for a very early SSL certificate that was rejected on
some platforms. We haven't used that one in ages, so let's just remove
the whole hack - but always show in the console output when there was an
SSL error.
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
The dive selection was initialized during data-reset. However,
this emitted a signal before all data-reset routines were run.
Ultimately, this led to access-after-free in the statistics code.
Instead, move the select_newest_visible_dive() signal from the
divelist-model to the process_loaded_dives() function. There
is no point in initializing the selection if the dive data
is cleared after all.
This change broke closing of the log, because the UI-selection
was not reset. Therefore, when clearing the data, clear the
selection before proceeding with clearing.
Signed-off-by: Berthold Stoeger <bstoeger@mail.tuwien.ac.at>
In 9bfc6d252, testing whether we are in the planner was changed to
use the planner_ds parameter instead of a global variable.
Unfortunately, two conditionals were inverted, leading to
an erroneous ceiling calculation when in the planner.
Restore the proper conditions. Moreover, instead of testing
the planner_ds parameter, use the already existing in_planner
flag, which is derived from said parameter.
Signed-off-by: Berthold Stoeger <bstoeger@mail.tuwien.ac.at>
Neither the CNS nor the OTU calculation took into
account the pO2 drop when using a PSCR. Furthermore,
there was some unit confusion due to not using internal
units.
Reported-by: arosl
Signed-off-by: Robert C. Helling <helling@atdotde.de>
When parsing "event 123" (?) a picture is added, without
initializing the picture structure. Thus, a picture with a
random gps location is added.
Use the "empty_picture" initializer to avoid that. Fixes a
Coverity warning.
Signed-off-by: Berthold Stoeger <bstoeger@mail.tuwien.ac.at>
We used to round the ceilings for the individual tissues with
%.1f but the maximal (and thus effective) ceiling only with
%.0f. This makes no sense; if anything, the effective ceiling
should be rounded up (to the conservative side).
This commit also shows the maximal ceiling with higher accuracy.
Reported-by: Peter Hübner
Signed-off-by: Robert C. Helling <helling@atdotde.de>
When doing OC bailout from a CCR dive, there could still
be pO2 sensor readings but those are not valid.
This fixes a problem noticed by Justin Ashworth.
Signed-off-by: Robert C. Helling <helling@atdotde.de>
Even when diving a CCR, the pO2 cannot exceed ambient
pressure. This only makes a difference at shallow depths.
Fix this in the calculation of OTUs and CNS.
This affects some tests that now have slightly different CNS and OTU values.
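The correction itself is a one-line clamp (sketch in Subsurface's internal
units, mbar):
```
#include <algorithm>

// Even with a CCR setpoint, pO2 can never exceed the ambient pressure.
static int effective_po2_mbar(int setpoint_mbar, int ambient_mbar)
{
        return std::min(setpoint_mbar, ambient_mbar);
}
```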
Suggested-by: Justin Ashworth
Signed-off-by: Robert C. Helling <helling@atdotde.de>
We had a user request to allow for setpoint changes
at certain depths for CCR deco.
You can now enter a cylinder with a name like
"SP 1.4" ('S' and 'P' and ' ' and a float) with
a switch depth, and that cylinder is interpreted as
a depth-dependent setpoint switch.
This user interface is a hack. But I believe that such
setpoint changes are similar enough to gas switches during
deco and should thus be handled in a similar manner.
I would be happy to hear ideas on how this could be made
less easter-eggish.
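As a sketch, the naming convention boils down to this kind of parse (details
in the real code may differ):
```
#include <cstdio>
#include <cstring>

// A cylinder named "SP <float>" marks a setpoint switch to that pO2 (in bar)
// at the cylinder's switch depth.
static bool parse_setpoint_cylinder(const char *name, double *setpoint_bar)
{
        if (strncmp(name, "SP ", 3) != 0)
                return false;
        return sscanf(name + 3, "%lf", setpoint_bar) == 1;
}
```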
Suggested-by: Justin Ashworth
Signed-off-by: Robert C. Helling <helling@atdotde.de>
The application state is a desktop-only thing. The mobile UI
also has its application state, but that is something completely
different.
The last remaining use of the application state was to flag
whether the planner is active. Since this has all been
unglobalized, the ApplicationState structure can be moved
from core to the desktop UI. And there it can be made local
to the MainWindow class.
Signed-off-by: Berthold Stoeger <bstoeger@mail.tuwien.ac.at>
The TemplateLayout prints different dives depending on
whether the planner is active. Instead of accessing a
global variable, pass the status down from the MainWindow.
That's all quite convoluted, since there are multiple
layers involved.
On the positive side, the in_planner() function now has
no users and can be removed.
Signed-off-by: Berthold Stoeger <bstoeger@mail.tuwien.ac.at>
To remove reliance on global state, pass an "in_planner" argument
to decoMode(). Thus, calls to in_planner() can be removed.
This is a more-or-less automated change. Ultimately it would
probably be better to pass the current deco-mode to the affected
functions instead of calling decoMode() with an in_planner
parameter.
Signed-off-by: Berthold Stoeger <bstoeger@mail.tuwien.ac.at>
To remove reliance on global state, pass an "in_planner" argument
to clear_deco(). Thus, calls to in_planner() can be removed.
Signed-off-by: Berthold Stoeger <bstoeger@mail.tuwien.ac.at>
To remove reliance on global state, pass an "in_planner" argument
to vpmb_next_gradient(). Thus, calls to in_planner() can be removed.
Signed-off-by: Berthold Stoeger <bstoeger@mail.tuwien.ac.at>
To remove reliance on global state, pass an "in_planner" argument
to add_segment(). Thus, calls to in_planner() can be removed.
Signed-off-by: Berthold Stoeger <bstoeger@mail.tuwien.ac.at>