To allow parsing into arbitrary trip_tables, add the corresponding
parameter to the parsing functions and the parser state. Currently,
all callers pass the global trip_table, so there should be no change
in functionality. These arguments will be replaced in subsequent commits.
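A rough sketch of the shape of this change (struct and function names
below are illustrative, not necessarily the ones used in the code):

    /* Sketch only: the parser state references the trip table to fill
     * instead of implicitly using the global one. */
    struct parser_state {
            struct dive_table *target_table;  /* dives go here */
            struct trip_table *trips;         /* new: trips go here */
            /* ... */
    };

    /* Parsing entry points gain the extra parameter; for now every
     * caller passes the global trip table, so behavior is unchanged. */
    int parse_buffer_into(const char *buffer, int size,
                          struct dive_table *table,
                          struct trip_table *trips);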
Signed-off-by: Berthold Stoeger <bstoeger@mail.tuwien.ac.at>
This works to some extent on part of a sample log I received. However,
quite a bit more work is still needed.
Signed-off-by: Miika Turkia <miika.turkia@gmail.com>
In d815e0c947, a dive_table pointer
was added to the parsing functions to allow parsing into tables
other than the global dive table. This will be necessary for undo of
import and for implementing a cleaner interface. A few cases, notably
CSV and proprietary formats, were forgotten.
Implement parsing into arbitrary tables also for these cases.
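A hedged sketch of the shape of that change for one of these importers
(the function name and parameter list are made up for illustration):

    /* Assumed shape only: the importer fills the dive table it is
     * handed instead of the global one. */
    int parse_csv_buffer(struct memblock *mem, const char *csvtemplate,
                         struct dive_table *table);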
Signed-off-by: Berthold Stoeger <bstoeger@mail.tuwien.ac.at>
To enable undo of divelog importing, it is crucial that parse_file()
can parse into arbitrary dive tables.
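Roughly, the entry point then looks like this (the exact parameter list
is an assumption, as is the table name in the comment):

    /* Sketch: parse_file() fills whatever dive table it is given. */
    int parse_file(const char *filename, struct dive_table *table);

    /* Existing callers keep today's behavior by passing the global
     * table, e.g. parse_file(filename, &dive_table), while an undoable
     * import can first parse into a scratch table instead. */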
Signed-off-by: Berthold Stoeger <bstoeger@mail.tuwien.ac.at>
Since all qt-helpers are defined in qthelper.cpp, there seems to be
no reason to have two include files. By unifying the two files,
duplication and inconsistencies are removed. The C++-only part is
simply compiled away with #ifdefs.
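A minimal sketch of the pattern (the declarations are placeholders, not
the real qthelper contents): the C-callable part is wrapped in
extern "C" for C++ consumers, and the Qt-only part is guarded so plain
C translation units never see it.

    /* qthelper.h -- sketch of the unified layout */
    #ifdef __cplusplus
    extern "C" {
    #endif

    /* helpers usable from both C and C++ */
    double parse_numeric_pref(const char *value);

    #ifdef __cplusplus
    }       /* end of the C-callable part */

    /* C++/Qt-only part, compiled away for C users */
    #include <QString>
    QString format_dive_duration(int seconds);
    #endif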
Signed-off-by: Berthold Stoeger <bstoeger@mail.tuwien.ac.at>
There are ca. 50 constructs of the kind
same_string(s, "")
to test for empty or null strings. Replace them with the new helper
function empty_string().
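The helper is presumably just a null-safe emptiness check along these
lines (sketch):

    #include <stdbool.h>

    /* Sketch: true for NULL or zero-length strings, so call sites of
     * same_string(s, "") become empty_string(s). */
    static inline bool empty_string(const char *s)
    {
            return !s || !*s;
    }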
Signed-off-by: Berthold Stoeger <bstoeger@mail.tuwien.ac.at>
Give functions in core/file.c, core/parse.c and core/import-csv.c
that are not used outside their translation unit static linkage.
parse_date is moved from core/file.c to core/import-csv.c, since it
is used only there.
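For example, along these lines (a sketch, not the actual code):

    /* core/import-csv.c -- the helper is no longer declared in a
     * header and gets internal linkage */
    static timestamp_t parse_date(const char *date)
    {
            /* ... body unchanged ... */
    }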
Signed-off-by: Berthold Stoeger <bstoeger@mail.tuwien.ac.at>
The function isCloudUrl() was only called in one place, parse_file().
But isCloudUrl() could only return true if the filename was of the
git-repository kind (url[branch]). In such a case, control flow would
never reach the point where isCloudUrl() is called, since
is_git_repository() returns non-NULL and the function returns early.
Therefore, remove this function. Moreover, adapt the affected if-statement
by replacing "str && !strcmp(str, ...)" with the more concise
"same_string(str, ...)".
Signed-off-by: Berthold Stoeger <bstoeger@mail.tuwien.ac.at>
When I unified the sample pressures in commit 11a0c0cc70 ("Unify
sample pressure and o2pressure as pressure[2] array") I did all the
obvious conversions, including the conversion of the Poseidon txt file
import:
        case POSEIDON_PRESSURE:
-               sample->cylinderpressure.mbar = lrint(val * 1000);
+               sample->pressure[0].mbar = lrint(val * 1000);
                break;
        case POSEIDON_O2CYLINDER:
-               sample->o2cylinderpressure.mbar = lrint(val * 1000);
+               sample->pressure[1].mbar = lrint(val * 1000);
                break;
which was ObviouslyCorrect(tm).
But as so often is the case, obvious doesn't actually exist. The old
"o2cylinderpressure[]" model had an implicit sensor associated with it,
and that implicit sensor mapping wasn't obvious, and didn't get fixed.
It turns out that the way the Poseidon sensor mapping works, the O2
cylinder is cylinder 0, and the diluent cylinder is cylinder 1, so just
use the add_sample_pressure() helper to set both sensor index and
pressure value.
And since we now do all the sensor indexing right, we can also get rid
of some manual cylinder sample pressure code, because the generic dive
fixup will just DTRT. It used to screw up because the diluent sensor
number was wrong before, and the import code tried to work around that
by hand.
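The resulting conversion, roughly (assuming an
add_sample_pressure(sample, sensor, mbar) helper and the cylinder
numbering described above):

    case POSEIDON_PRESSURE:
            /* diluent cylinder is cylinder 1 in the Poseidon mapping */
            add_sample_pressure(sample, 1, lrint(val * 1000));
            break;
    case POSEIDON_O2CYLINDER:
            /* O2 cylinder is cylinder 0 */
            add_sample_pressure(sample, 0, lrint(val * 1000));
            break;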
Reported-by: Anton Lundin <glance@acc.umu.se>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
We currently carry two pressures around for all the samples and plot
info, but the second pressure is reserved for CCR dives as the O2
cylinder pressure.
That's kind of annoying when we *could* use it for regular sidemount
dives as the secondary pressure.
So start prepping for that instead: don't make it "pressure" and
"o2pressure", make it just be an array of two pressure values.
NOTE! This is purely mindless prepwork. It literally just does a
search-and-replace, keeping the exact same semantics, so "pressure[1]"
is still just O2 pressure.
But at some future date, we can now start using it for a second sensor
value for sidemount instead.
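In data-structure terms the change is just this (sketch; other sample
fields omitted, pressure_t being the millibar wrapper type):

    struct sample {
            /* ... */
            pressure_t pressure[2];  /* [0] was "pressure",
                                        [1] was "o2pressure" -- still
                                        O2 pressure for now */
            /* ... */
    };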
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
Datatrak import is called from parse_file() in file.c. This function
reads the full file to be imported into a memblock structure. It's
easier and more secure to parse this buffer instead of the file itself.
These are the necessary changes to the datatrak_import() declaration
and its call.
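The declaration changes along these lines (a sketch; the exact
parameter lists are assumptions):

    /* before: opened and read the file on its own */
    int datatrak_import(const char *filename, struct dive_table *table);

    /* after: parses the memblock buffer parse_file() already read */
    int datatrak_import(struct memblock *mem, struct dive_table *table);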
Signed-off-by: Salvador Cuñat <salvador.cunat@gmail.com>
Apparently the refactoring changed these values to be returned directly
in seconds. Not sure why, but luckily we have test cases that discovered
the change.
Signed-off-by: Miika Turkia <miika.turkia@gmail.com>
Move the GUI-independent Seabear import functionality to Subsurface
core. This will allow Robert to call it directly from the download-from-DC
code. Tested with H3 against released and daily versions of Subsurface.
The result differs somewhat, but it actually fixes two bugs:
- Temperature was mis-interpreted previously
- Sample interval for a dive with 1 second interval was parsed
incorrectly
Signed-off-by: Miika Turkia <miika.turkia@gmail.com>
Not using lrint(f) when converting double/float to int
creates rounding errors.
This error was detected by a TestParse::testParseDM4 failure
on Windows. It was creating rounding inconsistencies
on Linux too; see the change in TestDiveDM4.xml.
Enable -Wfloat-conversion for gcc versions greater than 4.9.0.
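The underlying issue is that a plain cast truncates while lrint()
rounds to the nearest integer, and -Wfloat-conversion makes such
implicit float-to-int conversions visible. A small standalone
illustration:

    #include <math.h>
    #include <stdio.h>

    int main(void)
    {
            double bar = 4.35;
            int truncated = (int)(bar * 100);    /* 434: the product is
                                                    slightly below 435 in
                                                    binary floating point,
                                                    and the cast truncates */
            int rounded = (int)lrint(bar * 100); /* 435: rounds to nearest */

            printf("%d %d\n", truncated, rounded);
            return 0;
    }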
Signed-off-by: Jeremie Guichard <djebrest@gmail.com>
This will parse the date and time information on CSV import if the file
name matches the one used by APD log viewer (date and time are available
in the file name). Hard-coding the year to 20?? is a bit unfortunate,
but as there are only 2 digits in the year, we have to invent something.
And it would be quite optimistic to assume this will bite us back any
time soon :D
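The year handling itself is trivial string assembly; a purely
illustrative sketch (the real APD file-name layout and parsing are not
reproduced here):

    #include <stdio.h>

    /* Hypothetical helper: prepend "20" to the two-digit year taken
     * from the file name, e.g. "17" -> "2017". */
    static void expand_year(const char *yy, char *out, size_t outlen)
    {
            snprintf(out, outlen, "20%.2s", yy);
    }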
Signed-off-by: Miika Turkia <miika.turkia@gmail.com>
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
The printed command line can be used to manually test the import function,
allowing faster testing of XSLT changes and showing debug prints that
are otherwise discarded by Subsurface.
Signed-off-by: Miika Turkia <miika.turkia@gmail.com>
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
Not sure why we claimed that this was successful when clearly it wasn't.
There's a risk that this could break something on the desktop, but it
makes no sense to me why that would be the right thing to do.
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
This allows us to parse the DL7 profile data (skipping the header and
footer).
Signed-off-by: Miika Turkia <miika.turkia@gmail.com>
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
We first check the sha to see if we want to load at all. But at that
point we already have the repository and the branch and we have synced
with the remote. So when we decide that we need to reload from storage,
we don't need to repeat those steps, instead we can go directly to the
git load.
For that to work we need to pass the repository pointer and the branch
name back to the caller so that we can directly call git_load_dives().
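A hedged sketch of the resulting control flow (the helper names other
than git_load_dives() are made up for illustration and the real
signatures differ):

    /* assumed declarations, for illustration only */
    struct git_repository;                           /* opaque handle */
    extern struct git_repository *open_and_sync(const char *url,
                                                 const char **branchp);
    extern int saved_sha_matches(struct git_repository *repo,
                                 const char *branch);
    extern int git_load_dives(struct git_repository *repo,
                              const char *branch);

    static int check_sha_and_load(const char *url)
    {
            const char *branch;
            struct git_repository *repo = open_and_sync(url, &branch);

            if (!repo)
                    return -1;
            if (saved_sha_matches(repo, branch))
                    return 0;   /* local data is current, nothing to load */

            /* we already have the repo and branch -- go straight to
             * the git load, without re-opening or re-syncing */
            return git_load_dives(repo, branch);
    }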
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
Having subsurface-core as a directory name really messes with
autocomplete and is obviously redundant. Similarly, qt-mobile caused an
autocomplete conflict and also was inconsistent with the desktop-widget
name for the directory containing the "other" UI.
And while cleaning up the resulting change in the path name for include
files, I decided to clean up those even more to make them consistent
overall.
This could have been handled in more commits, but since this requires a
make clean before the build, it seemed more sensible to do it all in one.
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>