This will parse the date and time information on CSV import if the file
name matches the one used by APD log viewer (date and time are available
in the file name). Hard-coding the year to 20?? is a bit unfortunate,
but as there are only two digits in the year, we have to invent
something. And it seems safe to assume this won't bite us back any time
soon :D
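Roughly, the idea looks like this (a sketch only - the "YYMMDD-HHMM"
pattern and the function name are assumptions for illustration, not the
exact APD naming scheme):

    #include <stdio.h>
    #include <time.h>

    /* Sketch: pull date/time out of an APD-style file name, expanding
     * the two-digit year with the hard-coded "20" century. */
    static int parse_apd_filename(const char *name, struct tm *tm)
    {
        int yy, mon, day, hour, min;
        if (sscanf(name, "%2d%2d%2d-%2d%2d", &yy, &mon, &day, &hour, &min) != 5)
            return 0;
        tm->tm_year = 2000 + yy - 1900; /* the "20??" assumption */
        tm->tm_mon = mon - 1;
        tm->tm_mday = day;
        tm->tm_hour = hour;
        tm->tm_min = min;
        return 1;
    }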
Signed-off-by: Miika Turkia <miika.turkia@gmail.com>
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
The printed command line can be used to manually test the import
function, allowing faster testing of XSLT changes and showing debug
prints that are otherwise discarded by Subsurface.
Signed-off-by: Miika Turkia <miika.turkia@gmail.com>
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
This allows us to parse the DL7 profile data (skipping the header and
footer).
Signed-off-by: Miika Turkia <miika.turkia@gmail.com>
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
We import a few logs that are archived in a zip file, e.g. the
divelogs.de import is a zip file with a .dld extension. In case the zip
file is empty, we should return an error message that states that fact,
not a parse error. This also ends the input file parsing early, cleaning
up the error message on the console.
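A sketch of the check with libzip (report_error() stands in for whatever
error reporting helper the caller uses):

    /* Sketch: an archive with zero entries gets its own error message
     * instead of falling through to a parse error. */
    struct zip *zip = zip_open(filename, 0, NULL);
    if (zip && zip_get_num_entries(zip, 0) == 0) {
        zip_close(zip);
        return report_error("Empty file '%s'", filename);
    }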
Signed-off-by: Miika Turkia <miika.turkia@gmail.com>
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
When run with the -v option, this prints local file names such as the
path to the local git repository and the hash file.
Signed-off-by: Robert C. Helling <helling@atdotde.de>
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
These were left unused after cleaning up the import function parameters.
Signed-off-by: Miika Turkia <miika.turkia@gmail.com>
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
Continuing the crusade against an excessive number of parameters for
some functions. This should be the last of the import functions to be
cleaned up.
Signed-off-by: Miika Turkia <miika.turkia@gmail.com>
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
This removes the excessive number of parameters on manual CSV import. We
just use an appropriate string array that can be directly fed to XSLT
parsing.
Signed-off-by: Miika Turkia <miika.turkia@gmail.com>
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
There was a bug in the MKVI download tool that resulted in erroneous
sample times. This fix takes care of that and should work similarly to
the vendor's own tool.
Fixes #916
Signed-off-by: Miika Turkia <miika.turkia@gmail.com>
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
If we are trying to open an empty file, skip the attempts to parse the
input and report an appropriate error message.
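Something along these lines (names are illustrative):

    /* Sketch: a zero-length file is reported, not parsed */
    if (mem.size == 0)
        return report_error("Empty file '%s'", filename);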
Signed-off-by: Miika Turkia <miika.turkia@gmail.com>
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
If we are running Subsurface in verbose mode, print the xsltproc command
line needed to test the XSLT manually.
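For example (the parameter and variable names here are illustrative, not
the exact ones used):

    /* Sketch: show how to reproduce the transform outside Subsurface */
    if (verbose)
        fprintf(stderr, "xsltproc --stringparam timeField %s %s %s\n",
            time_field, xslt_file, csv_file);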
Signed-off-by: Miika Turkia <miika.turkia@gmail.com>
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
When Subsurface is run with a high enough verbosity level, generate the
command line to test Seabear import manually with xsltproc.
Signed-off-by: Miika Turkia <miika.turkia@gmail.com>
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
It seems that adding parameters to the Seabear import has resulted in
some parameters being missed, and this led to a parse error / recursion.
This patch tackles the problem by introducing a dynamic parameter
counter.
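The pattern, sketched with made-up parameter names:

    /* Sketch: pnr always points at the next free slot, so adding a new
     * parameter cannot silently leave the list out of sync. */
    const char *params[MAX_PARAMS]; /* MAX_PARAMS is illustrative */
    int pnr = 0;

    params[pnr++] = "timeField";
    params[pnr++] = time_field_value;
    params[pnr++] = "depthField";
    params[pnr++] = depth_field_value;
    params[pnr] = NULL; /* the XSLT side expects a NULL terminator */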
Signed-off-by: Miika Turkia <miika.turkia@gmail.com>
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
We do not really need the buffers when doing CSV import. Instead we use
dynamic memory allocation for the values. Note that the Seabear part is
not tested, as that import is no longer working for me.
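The gist of it (start and end are assumed to delimit one value in the
input line):

    /* Sketch: allocate exactly what the value needs instead of copying
     * into a fixed-size buffer */
    char *value = strndup(start, end - start);
    if (!value)
        return -1;
    /* ... use value ... */
    free(value);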
Signed-off-by: Miika Turkia <miika.turkia@gmail.com>
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
We need to start the second search from the start of the buffer if the
\r\n search returns nothing.
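In other words (with ptr being the start of the buffer):

    /* Sketch: if no \r\n is found, look for a plain \n - starting from
     * the beginning of the buffer again, not from wherever the first
     * search left off */
    char *eol = strstr(ptr, "\r\n");
    if (!eol)
        eol = strstr(ptr, "\n");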
Signed-off-by: Miika Turkia <miika.turkia@gmail.com>
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
The import of setpoint values is tested with Seabear data.
Signed-off-by: Miika Turkia <miika.turkia@gmail.com>
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
This parses the basic metadata of a dive when importing Divinglog
database. (Discarding deleted dives is my best guess as I do not have
that in my sample dive.)
Signed-off-by: Miika Turkia <miika.turkia@gmail.com>
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
Remove several very noisy messages on dive site handling (this seems to
work well now, so I think we can remove most of them - a couple were left
that indicate actual issues).
Also remove all the calls to "translate" when outputting data to stderr.
Error messages that indicate issues where the user will basically have
to come and ask the developers for help shouldn't be localized. They
should be in English to make it easier for us to figure out what's going
on.
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
Instead of replacing all the empty model tags with some text after a CSV
to XML transform, just produce that text in the CSV to XML transform
itself.
Signed-off-by: Anton Lundin <glance@acc.umu.se>
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
This adds support for importing individual O2 sensors from a CSV file,
e.g. an APD log viewer file.
Signed-off-by: Anton Lundin <glance@acc.umu.se>
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
Free memory returned from parse_mkvi_value()
Free memory returned from printGPSCoords()
Free memory allocated in added_list and removed_list
Free memory allocated when adding suffix to dive site name
Free memory allocated in cache_deco_state()
Free memory allocated in build_filename()
Free memory allocated in get_utf8()
Free memory allocated in alloc_dive()
Free memory allocated as cache but never used
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
This was a poorly implemented hack from when we executed the reverse geo
lookup in the main thread and opening a V2 file could take a very long
time. We need to do the "Welcome" message quite differently.
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
This creates the basis to allow other backends to be used with the cloud
storage infrastructure.
So far this should all just transparently continue to work. A user would
have to manually add the cloud_base_url entry to the CloudStorage section
in their config file in order to use a different backend server.
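For example (the URL is a placeholder):

    [CloudStorage]
    cloud_base_url=https://cloud.example.com/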
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
Once we failed to load data from cloud storage (for example the first time
we try to use it when the remote repository is empty), don't show git
related errors to the user. It's enough to tell them that the cloud
storage is empty.
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
The lower level functions will already report that things didn't connect
successfully; there is no reason to repeat it here (which would then
expose the git URL).
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
Also change the name of the enum and make sure all the inner functions get
passed the remote transport information.
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
This change once again tests if the remote can be reached. Even with a
fairly big data file and a medium speed internet connection the remote
sync is fast enough to call it nearly instantaneous. Maybe a couple of
seconds.
We may need more checks / different heuristics / warnings if the sync
didn't happen, etc. But for now this should allow more reasonable testing.
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
OSTCTools is Windows-based software by Robert Angeymar which performs
configuration upgrades, memory analysis, and download tasks for H&W OSTC
devices.
Downloaded dives are stored in files (one archive each) with the raw
binary data heavily padded at the beginning of the file, plus some other
data not included in the H&W dive header protocol, such as the device's
serial number.
The import function simply takes the raw data part of the file and lets
libdivecomputer do the parsing. It then adds some additional info, such
as the OSTC-reported dive number and the device's serial number.
Please note that OSTCTools is *not* real logging software; it simply
gets the DC raw data, so there isn't any information about dive site,
equipment and so on.
Signed-off-by: Salvador Cuñat <salvador.cunat@gmail.com>
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
Move the extern declaration of the function datatrak_import() to file.h,
where it fits better than in file.c.
Signed-off-by: Salvador Cuñat <salvador.cunat@gmail.com>
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
NL should be set only if there is an empty line in the input file. That
way, returning when no empty line exists (a simplistic test for a
Seabear CSV file) makes more sense.
Signed-off-by: Miika Turkia <miika.turkia@gmail.com>
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
This adds support for Seabear's new import format that is used by the H3
and T1. In the future the HUDC should also switch to the new format. The
main difference from the old one is that time stamps are no longer
recorded in the samples; instead, the sample interval is specified in
the header.
The header contains other useful information as well that we should
build support for. E.g. surface pressure, gas mixes, GF, and mode might
be useful additions later on.
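The sample times are then derived from the header, roughly like this
(names are illustrative):

    /* Sketch: with no per-sample time stamps, the time of sample i is
     * simply i times the interval from the header */
    for (int i = 0; i < nr_samples; i++)
        sample[i].time.seconds = i * interval;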
Signed-off-by: Miika Turkia <miika.turkia@gmail.com>
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
We need to take both \r\n and \n newline formats into account before
failing the Seabear validity check on import. (A Seabear file has empty
lines; if the import file does not have one -> return.)
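A sketch of the check (mem->buffer standing in for the loaded file
contents):

    /* Sketch: accept either line-ending style when looking for the
     * empty line a Seabear file must contain */
    if (!strstr(mem->buffer, "\r\n\r\n") && !strstr(mem->buffer, "\n\n"))
        return 0; /* no empty line -> not a Seabear CSV */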
Signed-off-by: Miika Turkia <miika.turkia@gmail.com>
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
In commit 0ed4356fc2 ("Display slowness warning before opening a V2
file") I changed the xml parser to abort if it detects a V2 XML file so
that the main application can show a warning message (and eventually, a
few choices for the user) before parsing and processing older XML files.
The input that gets pre-processed by XSLT files claims to be V2 XML, and
so the parser aborts - yet the code to show a warning and restart the
parse isn't present in this code path, so XSLT-based imports always fail.
This hack works around this by temporarily setting the variable that
claims that the warning has been shown to the user.
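The shape of the workaround, with a hypothetical flag name and parse
call:

    /* Sketch: pretend the V2 warning was already shown while parsing
     * the XSLT output, then restore the previous state */
    bool saved = v2_warning_shown;
    v2_warning_shown = true;
    parse_buffer(xslt_output, size); /* illustrative parse call */
    v2_warning_shown = saved;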
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>