Free memory returned from parse_mkvi_value()
Free memory returned from printGPSCoords()
Free memory allocated in added_list and removed_list
Free memory allocated when adding suffix to dive site name
Free memory allocated in cache_deco_state()
Free memory allocated in build_filename()
Free memory allocated in get_utf8()
Free memory allocated in alloc_dive()
Free memory allocated as cache but never used
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
This was a poorly implemented hack from when we executed the reverse geo
lookup in the main thread and opening a V2 file could take a very long
time. We need to do the "Welcome" message quite differently.
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
This creates the basis to allow other backends to be used with the cloud
storage infrastructure.
So far this should all just transparently continue to work. A user would
have to manually add the cloud_base_url entry to the CloudStorage section
in their config file in order to use a different backend server.
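For example, assuming the usual INI-style layout of the settings file
(the server URL below is purely a placeholder, not a real backend), the
entry would look something like this:

    [CloudStorage]
    cloud_base_url=https://cloud.example.com/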
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
Once we have failed to load data from cloud storage (for example, the first
time we try to use it, when the remote repository is empty), don't show git
related errors to the user. It's enough to tell them that the cloud
storage is empty.
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
The lower level functions will already report that things didn't connect
successfully; there is no reason to repeat it here (which would also expose
the git URL).
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
Also change the name of the enum and make sure all the inner functions get
passed the remote transport information.
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
This change once again tests if the remote can be reached. Even with a
fairly big data file and a medium speed internet connection the remote
sync is fast enough to call it nearly instantaneous. Maybe a couple of
seconds.
We may need more checks / different heuristics / warnings if the sync
didn't happen, etc. But for now this should allow more reasonable testing.
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
OSTCTools is a Windows based software by Robert Angeymar which performs
configuration upgrades, memory analysis and download tasks for H&W OSTC
devices.
Downloaded dives are stored in files (one archive each) with the raw
binary data heavily padded at the beginning of the file, plus some other
data not included in the H&W dive header protocol, such as the device's
serial number.
The import function simply takes the raw data part of the file and lets
libdivecomputer do the parsing.
It then adds some additional info, such as the OSTC reported dive number
and the device's serial number.
Please note that OSTCTools is *not* real logging software; it simply
grabs the DC raw data, so there isn't any information about dive site,
equipment and so on.
Signed-off-by: Salvador Cuñat <salvador.cunat@gmail.com>
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
Move extern declaration of function datatrak_import() to file.h, where it
fits better than in file.c
Signed-off-by: Salvador Cuñat <salvador.cunat@gmail.com>
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
NL should be set only if there is an empty line in the input file. That
way the early return when no empty line exists (a simplistic test for a
Seabear CSV file) makes more sense.
Signed-off-by: Miika Turkia <miika.turkia@gmail.com>
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
This adds support for Seabear's new import format that is used by the H3
and T1. In the future the HUDC should also switch to the new format. The
main difference from the old one is that time stamps are no longer
recorded in the samples; instead the sample interval is specified in the
header.
The header contains other useful information as well that we should
build support for. E.g. surface pressure, gas mixes, GF, and mode might
be useful additions later on.
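As a quick illustration of that difference (a hedged sketch only, not the
actual parser code), the time of each sample can simply be derived from
the interval given in the header:

    /* Hedged sketch, not the actual Seabear parser: with no per-sample
     * time stamps, sample i occurs at i * interval, with the interval
     * taken from the file header. */
    static int seabear_sample_time(int sample_index, int interval_seconds)
    {
        return sample_index * interval_seconds;
    }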
Signed-off-by: Miika Turkia <miika.turkia@gmail.com>
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
We need to take both \r\n and \n new line formats into account before
failing the Seabear validity check on import. (A Seabear file contains
empty lines; if the import file does not have any, return.)
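A minimal sketch of such a check, assuming the file contents sit in a
NUL-terminated buffer (this is not the exact production code):

    /* Minimal sketch, assuming 'mem' is the NUL-terminated file content;
     * needs <string.h>. An empty line appears as "\n\n" with Unix line
     * endings or "\r\n\r\n" with DOS line endings. */
    if (!strstr(mem, "\n\n") && !strstr(mem, "\r\n\r\n"))
        return -1; /* no empty line at all -> not a Seabear CSV file */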
Signed-off-by: Miika Turkia <miika.turkia@gmail.com>
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
In commit 0ed4356fc2 ("Display slowness warning before opening a V2
file") I changed the XML parser to abort if it detects a V2 XML file so
that the main application can show a warning message (and eventually, a
few choices for the user) before parsing and processing older XML files.
The input that gets pre-processed by XSLT files claims to be V2 XML and so
the parser aborts - yet the code to show a warning and restart the
parse isn't present in this code path, so XSLT based imports always fail.
This hack works around that by temporarily setting the variable that claims
that the warning has been shown to the user.
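In outline the workaround looks roughly like this (the flag name is an
assumption used purely for illustration, not necessarily the real
identifier):

    /* Rough outline of the hack; 'v2_question_shown' is an assumed name
     * for the "warning already shown" flag. */
    bool saved = v2_question_shown;
    v2_question_shown = true;   /* pretend the user already saw the warning */
    /* ... run the XSLT-converted buffer through the XML parser ... */
    v2_question_shown = saved;  /* restore the previous state */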
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
Sequentially parses a file, expected to be a Datatrak/WLog divelog, and
converts the dive info into Subsurface's dive structure.
As my first DC, back in the 90s, was an Aladin Air X, the obvious choice of log
software was DTrak (Win version). After using it for some time we moved to WLog
(shareware software more user friendly than DTrak, capable of printing and,
better still, able to run under Wine, which, as a Linux user, was decisive for
me). Then, some years later, my last Aladin died and I moved to an OSTC,
forcing me to look for software that supports this DC.
I found JDivelog, which was capable of importing DTrak logs, and used it for
some time until I discovered Subsurface and switched to it.
The fact was that importing DTrak dives into JDivelog and then re-importing
them into Subsurface caused significant data loss (mainly in the profile events
and alarms) and odd placement of some other info in the dive notes (mostly tag
items in the original DTrak software). This situation can't really be solved
with tools like divelogs.de, which cause similar if not greater data loss.
Although this won't be a core feature for Subsurface, I expect it can be useful
for some other divers, as it has been for me.
Comments and issues:
Datatrak/WLog files include a lot of diving data which is not directly
supported in Subsurface; in these cases we mostly choose to use "tags".
The lack of some important info in Datatrak archives (e.g. the tank's initial
pressure) forces us to make some arbitrary assumptions (e.g. initial pressure =
200 bar).
There might be archives coming directly from the old DOS days, as the first
versions of Datatrak ran on that OS; those were encoded in CP437 or CP850,
while dive logs coming from Win versions seem to be encoded in CP1252. Finally,
WLog seems to use a mixed, confusing style. The program directly converts some
of the old encoded chars to ISO 8859, but some issues are expected with
non-alphabetic chars, e.g. "ª".
There are two text fields: "Other activities" and "Dive notes", both limited to
256 chars. We have merged them into Subsurface's "Dive Notes"; the first one
could have been "tagged", but we can't be sure the user filled it in in a
tag-friendly way.
WLog adds some information to the dive and lets the user write notes longer
than 256 chars. This is achieved, while keeping compatibility with DTrak
divelogs, by adding a complementary file with the same name as the .log file
but with an .add extension, where all this extra info is stored. We have not
yet worked with these complementary files.
This work is based on the paper referenced in bugtracker #194, which has some
errors (e.g. beginning of log and beginning of dive are swapped) and a lot of
bytes of unknown meaning. Example.log shows at least one more byte than those
described in the paper for the O2 Aladin computer; this could be a byte related
to the use of SCR, but the lack of an OC dive with an O2 computer makes it
impossible for us to compare.
The only way we have figured out to distinguish a priori between SCR and non
SCR dives with O2 computers is that the dives are tagged with a "rebreather"
tag. Obviously this is not a very trustworthy way of doing things. In SCR dives
the O2% in the mix probably means the maximum O2% in the circuit, not the O2%
of the EAN mix in the tanks, which would be unknown in this case.
The list of DCs in the bug #194 paper seems incomplete; we have added
one or two from WLog and discarded those which are known to exist but whose
model is unknown, grouping them under the imaginative name of "unknown". The
list can easily be extended in the future if we ever learn the model
identifiers.
BTW, in Example.log the 0x00 identifier is used for some DC dives, while from
my own divelogs it can be inferred that 0x00 is used for manually entered
dives; this could easily be an error in Example.log coming from a
preproduction DC model.
Example.log, which is shipped in the datatrak package, is included in the dives
directory for testing purposes.
[Dirk Hohndel: some small cleanups, merged with latest master, support
divesites, remove the pointless memset() before free() calls,
add to cmake build]
Signed-off-by: Salvador Cuñat <salvador.cunat@gmail.com>
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
This increases the limits when parsing CSV files with dive profiles,
allowing us to import bigger files in one go.
Signed-off-by: Miika Turkia <miika.turkia@gmail.com>
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
Add a new line when importing a CSV file, so we get the last dive
imported as well (in case the file ends without a trailing new line).
See #814
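A minimal sketch of the idea, assuming the CSV text has been read into a
growable buffer (not the exact production code):

    /* Minimal sketch, not the exact code; needs <stdlib.h>. Append a
     * trailing newline if the file doesn't end with one, so the last
     * line gets parsed too. */
    if (len > 0 && buf[len - 1] != '\n') {
        char *tmp = realloc(buf, len + 2);
        if (tmp) {
            buf = tmp;
            buf[len++] = '\n';
            buf[len] = '\0';
        }
    }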
Signed-off-by: Miika Turkia <miika.turkia@gmail.com>
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
Back in e544796199 ("Add missing divemaster field to the manual
import"), the divemaster field was added without extending the array
length. This corrects that.
Signed-off-by: Anton Lundin <glance@acc.umu.se>
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
This parses the dive profile from a Divesoft Freedom log file. Only the
depth profile is currently supported. There is also something wrong, as
the log file cannot be given as a parameter but must be opened or imported
once Subsurface is running. Note that so far no metadata is parsed.
Signed-off-by: Miika Turkia <miika.turkia@gmail.com>
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
This code sets the parameters properly to support the new fields in
manual CSV import.
Signed-off-by: Miika Turkia <miika.turkia@gmail.com>
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
This implements importing of the dive profile and temperature graph along
with some metadata from a Cobalt Divelog database.
Signed-off-by: Miika Turkia <miika.turkia@gmail.com>
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
This needs to be the same string as used for the entry in the Import
dialog... since translate() is a macro in this .c file and defined to have
only two arguments, I'm using the NOOP3 macro to get this correctly added
to the translation sources, including the comment. That makes the code a
little odd, but seems to work.
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
We are quite inconsistent when it comes to reporting back errors.
One case where this caused somewhat unexpected behavior was when the
user tried to open a .csv file by passing it as a command line
argument. The file was silently ignored, but treated as if it had been
opened successfully.
Now we issue a somewhat reasonable error message.
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
It seems that not enough space was reserved for the whole mem buffer
when adding XML tags around the CSV file. When unlucky, the memory
allocator's metadata got overwritten.
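A hedged sketch of the kind of sizing that avoids this (the tag strings
and variable names are purely illustrative, not the actual code):

    /* Illustrative sketch: reserve room for the CSV text plus the
     * surrounding XML tags and the terminating NUL, rather than the CSV
     * length alone. Needs <stdio.h>, <stdlib.h> and <string.h>. */
    const char *pre = "<csv>", *post = "</csv>";
    size_t need = strlen(pre) + csv_len + strlen(post) + 1;
    char *xmlbuf = malloc(need);
    if (xmlbuf)
        snprintf(xmlbuf, need, "%s%.*s%s", pre, (int)csv_len, csv, post);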
Signed-off-by: Miika Turkia <miika.turkia@gmail.com>
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
After some feedback on the mailing list, bitwise XOR wasn't the
preferred way to build the gaschange event.
Signed-off-by: Anton Lundin <glance@acc.umu.se>
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
After some feedback on the mailing list, these strings were preferred.
Signed-off-by: Anton Lundin <glance@acc.umu.se>
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
This is based on the great work done by Søren Reinke on his MKVI Logfile
Analyzer.
Signed-off-by: Anton Lundin <glance@acc.umu.se>
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
This is based on the great work done by Søren Reinke on his MKVI Logfile
Analyzer.
Signed-off-by: Anton Lundin <glance@acc.umu.se>
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
This is based on the great work done by Søren Reinke on his MKVI Logfile
Analyzer.
Signed-off-by: Anton Lundin <glance@acc.umu.se>
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
This is based on the great work done by Søren Reinke on his MKVI Logfile
Analyzer.
Signed-off-by: Anton Lundin <glance@acc.umu.se>
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
This is based on the great work done by Søren Reinke on his MKVI Logfile
Analyzer.
Signed-off-by: Anton Lundin <glance@acc.umu.se>
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
This is based on the great work done by Søren Reinke on his MKVI Logfile
Analyzer.
Signed-off-by: Anton Lundin <glance@acc.umu.se>
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
In commit 0d7c192e6e ("For CCR dives, the diluent cylinder is the
current cylinder") a few things got broken. This tries to undo those
changes and adds expanded XML output.
1) Calculate the correct partial pressure of oxygen to be plotted on
the dive profile, taking into account the oxygen sensor data.
Currently OC PO2 values are erroneously shown, due to a wrong
calling parameter to fill_pressures().
2) Read start and end cylinder pressures correctly. Some wrong
assignments were made in file.c. This is now corrected and the correct
cylinder pressures are shown in the equipment tab.
3) Write correct cylinder pressures to XML. Currently the data for
the two cylinders are written to XML the wrong way round
(diluent pressures = oxygen and vice versa).
4) Expand XML output:
a) Write oxygen sensor data to XML
b) Write no_of_02sensors to XML
Signed-off-by: willem ferguson <willemferguson@zoology.up.ac.za>
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>