The parser API was very annoying, as a number of to-be-filled tables
were passed in as pointers. The goal of this commit is to collect all
of these tables in a single struct.
This should make it (more or less) clear what is actually
written into the divelog files.
Moreover, it should now be rather easy to search for instances where
the global logfile is accessed (and it turns out that there are many!).
The divelog struct does not contain the tables as substructs,
but only collects pointers. The idea is that the "divelog.h"
file can be included without all the other files describing
the numerous tables.
To make it easier to use from C++ parts of the code, the
struct implements a constructor and a destructor. Sadly,
we can't use smart pointers, since the pointers are accessed
from C code. Therefore the constructor and destructor are
quite complex.
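A rough sketch of the idea (a sketch only; the exact member names and
field list follow the real "divelog.h"):

    /* divelog.h, schematically */
    #include <stdbool.h>

    struct dive_table;
    struct trip_table;
    struct dive_site_table;
    struct device_table;
    struct filter_preset_table;

    struct divelog {
        struct dive_table *dives;
        struct trip_table *trips;
        struct dive_site_table *sites;
        struct device_table *devices;
        struct filter_preset_table *filter_presets;
        bool autogroup;
    #ifdef __cplusplus
        divelog();      /* allocates and initializes the tables */
        ~divelog();     /* clears and frees them again */
    #endif
    };

The forward declarations are what keep "divelog.h" includable without
pulling in the headers of the individual tables.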
The whole commit is large, but was mostly an automatic
conversion.
One oddity of note: the divelog structure also contains
the "autogroup" flag, since that is saved in the divelog.
This actually fixes a bug: Before, when importing dives
from a different log, the autogroup flag was overwritten.
This was probably not intended and does not happen anymore.
Signed-off-by: Berthold Stoeger <bstoeger@mail.tuwien.ac.at>
We had various random "free parts of the git info" left-overs from when
we passed down the git repo data ad hoc. Get rid of them, and replace
them with just doing a 'cleanup_git_info()' that does the final cleanup
of it all.
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
That function name was incomprehensible. What did it check? And what
did the return value mean?
So let's rename it to something that actually describes what it does,
and reverse the meaning of the return value while at it.
So now it's called 'remote_repo_uptodate()', and it returns true if the
remote repository branch has the same value as our 'saved_git_id'.
It's still a bit obscure, but at least within the context of the only
user, the code now makes _more_ sense than it used to:
    if (remote_repo_uptodate(fileNamePrt.data(), &info)) {
        appendTextToLog("Cloud sync shows local cache was current");
but maybe we could come up with even better semantics and naming, and
make it even clearer.
Requested-by: Dirk Hohndel <dirk@hohndel.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
We have this nasty habit of randomly passing down all the different
things that we use to look up the local and remote git repository, and
the information associated with it.
Start collecting the data into a 'struct git_info' instead, so that it
is easier to manage, and easier and more logical to just look up
different parts of the puzzle.
This is a fairly mechanical conversion, but has moved all the basic
information collection to the 'is_git_repository()' function. That
function no longer actually opens the repository (so the 'dry_run'
argument is gone); instead, a successful 'is_git_repository()' is
followed by 'open_git_repository()' if you actually want the old
non-dry_run semantics.
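Schematically, it is something like this (a sketch; the exact members
live in git-access.h and may differ):

    struct git_info {
        const char *url;        /* what the user handed us */
        const char *branch;     /* parsed out of "url[branch]" */
        const char *localdir;   /* local cache directory */
        git_repository *repo;   /* libgit2 handle, set by open_git_repository() */
        bool is_subsurface_cloud;
    };

    /* typical caller after this change: */
    struct git_info info;
    if (is_git_repository(filename, &info) && open_git_repository(&info)) {
        /* work with info.repo, info.branch, ... */
    }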
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
We know the preference is never empty, so stop testing for this. But
don't maintain two different preferences with basically the same
content. Instead add the '/git' suffix where needed and keep this all in
one place.
Simplify the extraction of the branch name from the cloud URL.
Also a typo fix and a new comment.
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
So far, we added a non-global device table to the parser states.
Now, create device nodes in that table instead of in the global
table. Thus, on undo of dive-import, the new device nodes will
be removed.
Signed-off-by: Berthold Stoeger <bstoeger@mail.tuwien.ac.at>
In analogy to the xml-parser, add a device-table to git's parser-state.
Currently this is unused. In upcoming commits the git parser will
then be changed to add device nodes in this table instead of the
global device table. The long-term goal is to detach the parsers
from global state and to make dive-import fully undoable.
Signed-off-by: Berthold Stoeger <bstoeger@mail.tuwien.ac.at>
If we want to prevent the parsers from directly modifying global data,
we have to provide a device_table to parse into. This adds such
a state and the corresponding function parameters. However,
for now this is unused.
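Schematically, the change looks like this (the real signatures carry
more arguments and may order them differently):

    struct parser_state {
        /* ...existing fields... */
        struct device_table *devices;   /* table to create device nodes in */
    };

    int parse_xml_buffer(const char *url, const char *buffer, int size,
                         struct dive_table *table, struct trip_table *trips,
                         struct dive_site_table *sites,
                         struct device_table *devices, const char **params);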
Adding new parameters is very painful and this commit shows that
we urgently need a "struct divelog" collecting all those tables!
Signed-off-by: Berthold Stoeger <bstoeger@mail.tuwien.ac.at>
We used a typedef "filter_preset_table_t" for the filter preset table,
because it is a "std::vector<filter_preset>". However, that is in
contrast to all the other global tables (dives, trips, sites) that we
have.
Therefore, turn this into a standard struct, which simply inherits
from "std::vector<filter_preset>". Note that while inheriting from
std::vector<> is generally not recommended, it is not a problem
here, because we don't modify it in any shape or form.
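In code, the change boils down to roughly this:

    #include <vector>

    struct filter_preset {
        /* name, filter constraints, ... */
    };

    struct filter_preset_table : public std::vector<filter_preset> {
    };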
Signed-off-by: Berthold Stoeger <bstoeger@mail.tuwien.ac.at>
This is mostly copy and paste of other git loading code. Sadly,
it adds a lot of state to the parser-state. I wish we could pass
different parser states to the parser_* functions.
Signed-off-by: Berthold Stoeger <bstoeger@mail.tuwien.ac.at>
This is a bit painful: since we don't want to modify the filter
presets when the user imports (as opposed to opens) a log,
we have to provide a table where the parser stores the presets.
Calling the parser is getting quite unwieldy, since many tables
are passed. We should probably introduce a structure representing
a full log-book at some point, which collects all the things that
are saved to the log.
Apart from that, this is simply the counterpart to saving to XML.
The interpretation of the string data is performed by core
functions, not the parser itself, to avoid code duplication with
the git parser.
Signed-off-by: Berthold Stoeger <bstoeger@mail.tuwien.ac.at>
Dives for the seac action computer are imported by the seacsync
program into two tables in an sqlite3 database.
The dive information is read from the headers_dive table.
The dive_data table is then queried for each dive to get samples.
The seac action computer is currently the only computer supported
by the seacsync program. It only supports two gas mixes, so the
parser will toggle between two cylinders whenever it detects a
change in the active O2 mix.
Dive start time is stored in UTC with a timezone offset.
A helper function to read this was added to qthelper.
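A hypothetical sketch of that conversion (the actual helper name and
signature in qthelper.cpp may differ):

    #include <QDateTime>
    #include <QString>
    #include <cstdint>

    // turn an ISO-8601 string carrying a UTC offset, e.g.
    // "2021-04-20T10:17:05+02:00", into seconds since the epoch
    int64_t seac_time_to_timestamp(const QString &text)
    {
        QDateTime dt = QDateTime::fromString(text, Qt::ISODate);
        if (!dt.isValid())
            return 0;
        return dt.toSecsSinceEpoch();   // QDateTime applies the offset for us
    }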
Default cases have been added to some switch statements
to assist in future development for other dive types and
salinity.
Example database has been added to ./dives/TestDiveSeacSync.db
Signed-off-by: James Wobser <james.wobser@gmail.com>
WLog is an ancient Win32-based shareware program whose goals were:
1) fully support divelogs coming from DataTrak (DOS or Win)
2) fill in some meaningful data which wasn't supported by the Uwatec software
3) have a more user-friendly GUI than Datatrak had
The problem of achieving goals 1) and 2) at the same time was solved by
adding a complementary file with a .add extension and - mandatorily - the
same base name as the .log file (including the directory tree).
This .add file has a fixed structure composed of a 12-byte header,
including a file type check and the Nº of dives following; then a fixed
850-byte block for each dive in the log file. Data field sizes and
positions are fixed inside these blocks and heavily zero-padded, so they
are easy to parse.
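The parsing loop is therefore straightforward; a sketch (the magic value
and field offsets below are placeholders, not the real WLog layout):

    #include <stddef.h>
    #include <string.h>

    #define WLOG_HEADER_SIZE 12
    #define WLOG_BLOCK_SIZE  850

    static int wlog_import(const unsigned char *buf, size_t size)
    {
        if (size < WLOG_HEADER_SIZE || memcmp(buf, "WLOG", 4) != 0) /* hypothetical magic */
            return -1;
        unsigned ndives = buf[4] | (buf[5] << 8);                   /* hypothetical offset */
        if (size < WLOG_HEADER_SIZE + (size_t)ndives * WLOG_BLOCK_SIZE)
            return -1;
        for (unsigned i = 0; i < ndives; i++) {
            const unsigned char *block = buf + WLOG_HEADER_SIZE + i * WLOG_BLOCK_SIZE;
            /* pick the fixed-position, zero-padded fields out of 'block' and
               attach them to the i-th dive parsed from the .log file */
            (void)block;
        }
        return 0;
    }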
A serious restriction imposed on the WLog user was *Do not edit the logs
with software other than WLog*; this was due to the order of dives in the
.log file being the same as the order of dives in the .add file. Though
you could view a WLog divelog in DataTrak, editing it there resulted in
mixing up all the extended data for the dives following the edited one.
Thus, we have to trust that the files are correct, and it is up to the
user to ensure this is so. If the extended data are mangled, they are
mangled in WLog too, and we are not trying to fix the mess, just
importing it.
On the technical side, we try to be smart about tank names, as neither
DataTrak nor WLog records them. So we just take the first tank in the
user's list matching the volume recorded in WLog.
For weights we add a translatable "unknown" string, as an empty string
results in the weight not being shown in subsurface-mobile (which could
be a reportable issue, BTW).
Signed-off-by: Salvador Cuñat <salvador.cunat@gmail.com>
parse_file() refused to load from a git repository if we already
had that repository and there were no changes. However, this only
checked the global divelist_changed flag, which does not track
undo-commands. Thus, after editing dives the user couldn't reload
from git.
Remove this check. It is somewhat questionable that the io layer
refuses to load from a repository anyway. Let the caller decide.
There appears to be a check_git_sha function for that purpose(?).
Signed-off-by: Berthold Stoeger <bstoeger@mail.tuwien.ac.at>
The git parser loads into the global dive table, even if it
is called indirectly via parse_file(). However, parse_file()
may be given a different table. Fix this by extending the
git parser state.
Signed-off-by: Berthold Stoeger <bstoeger@mail.tuwien.ac.at>
Move the declarations of the "report_error()" and "set_error_cb()"
functions and the "verbose" variable to errorhelper.h.
Thus, error-reporting translation units don't have to include the
big dive.h header file.
Signed-off-by: Berthold Stoeger <bstoeger@mail.tuwien.ac.at>
Move the declarations of these functions to "file.h" and "parse.h",
according to the translation unit they are defined in. Thus, not
all users of "dive.h" have to suck in "sqlite3.h".
Signed-off-by: Berthold Stoeger <bstoeger@mail.tuwien.ac.at>
To extend the undo system to dive sites, the importers and downloaders
must not parse directly into the global dive site table. Instead,
pass a dive_site_table argument to parse into.
For now, always pass the global dive_site_table so that this commit
should not cause any functional change.
Signed-off-by: Berthold Stoeger <bstoeger@mail.tuwien.ac.at>
To allow parsing into arbitrary trip_tables, add the corresponding
parameter to the parsing functions and the parser state. Currently,
all callers pass the global trip_table so there should be no change
in functionality. These arguments will be replaced in subsequent commits.
Signed-off-by: Berthold Stoeger <bstoeger@mail.tuwien.ac.at>
This works to some extent on part of a sample log I received. However,
quite a bit more work is still needed.
Signed-off-by: Miika Turkia <miika.turkia@gmail.com>
In d815e0c947 a dive_table pointer
was added to the parsing functions to allow parsing into tables
other than the global dive table. This will be necessary for undo of
import and the implementation of a cleaner interface. A few cases,
notably CSV and proprietary formats, were forgotten.
Implement parsing into arbitrary tables also for these cases.
Signed-off-by: Berthold Stoeger <bstoeger@mail.tuwien.ac.at>
To enable undo of divelog-importing it is crucial that parse_file()
can parse into arbitrary dive tables.
Signed-off-by: Berthold Stoeger <bstoeger@mail.tuwien.ac.at>
Since all qt-helpers are defined in qthelper.cpp, there seems to be
no reason to have two include files. By unifying the two files,
duplication and inconsistencies are removed. The C++-only part is
simply compiled away with #ifdefs.
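The resulting header is shaped roughly like this:

    /* qthelper.h, schematically */
    #ifdef __cplusplus
    /* C++-only declarations: QString/Qt helpers, templates, ... */
    extern "C" {
    #endif

    /* declarations callable from both C and C++ */

    #ifdef __cplusplus
    }
    #endif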
Signed-off-by: Berthold Stoeger <bstoeger@mail.tuwien.ac.at>
There are ca. 50 constructs of the kind
same_string(s, "")
to test for empty or null strings. Replace them by the new helper
function empty_string().
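The helper is essentially a one-liner along these lines:

    #include <stdbool.h>

    static inline bool empty_string(const char *s)
    {
        return !s || !*s;   /* true for both NULL and "" */
    }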
Signed-off-by: Berthold Stoeger <bstoeger@mail.tuwien.ac.at>
Give functions in core/file.c, core/parse.c and core/import-csv.c
that are not used outside their translation unit static linkage.
parse_date is moved from core/file.c to core/import-csv.c, since it
is used only there.
Signed-off-by: Berthold Stoeger <bstoeger@mail.tuwien.ac.at>
The function isCloudUrl() was only called in one place, parse_file().
But isCloudUrl() could only return true if the filename was of the
git-repository kind (url[branch]). In such a case, control flow would
never reach the point where isCloudUrl() is called, since
is_git_repository() returns non-NULL and the function returns early.
Therefore, remove this function. Moreover, adapt the affected if-statement
by replacing "str && !strcmp(str, ...)" with the more concise
"same_string(str, ...)".
Signed-off-by: Berthold Stoeger <bstoeger@mail.tuwien.ac.at>
When I unified the sample pressures in commit 11a0c0cc70 ("Unify
sample pressure and o2pressure as pressure[2] array") I did all the
obvious conversions, including the conversion of the Poseidon txt file
import:
case POSEIDON_PRESSURE:
- sample->cylinderpressure.mbar = lrint(val * 1000);
+ sample->pressure[0].mbar = lrint(val * 1000);
break;
case POSEIDON_O2CYLINDER:
- sample->o2cylinderpressure.mbar = lrint(val * 1000);
+ sample->pressure[1].mbar = lrint(val * 1000);
break;
which was ObviouslyCorrect(tm).
But as so often is the case, obvious doesn't actually exist. The old
"o2cylinderpressure[]" model had an implicit sensor associated with it,
and that implicit sensor mapping wasn't obvious, and didn't get fixed.
It turns out that the way the Poseidon sensor mapping works, the O2
cylinder is cylinder 0, and the diluent cylinder is cylinder 1, so just
use the add_sample_pressure() helper to set both sensor index and
pressure value.
And since we now do all the sensor indexing right, we can also get rid
of some manual cylinder sample pressure code, because the generic dive
fixup will just DTRT. It used to screw up because the diluent sensor
number was wrong before, and the import code tried to work around that
by hand.
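So, roughly (modulo the exact surrounding code), the snippet quoted
above turns into:
case POSEIDON_PRESSURE:
- sample->pressure[0].mbar = lrint(val * 1000);
+ add_sample_pressure(sample, 1, lrint(val * 1000)); /* diluent = cylinder 1 */
break;
case POSEIDON_O2CYLINDER:
- sample->pressure[1].mbar = lrint(val * 1000);
+ add_sample_pressure(sample, 0, lrint(val * 1000)); /* O2 = cylinder 0 */
break;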
Reported-by: Anton Lundin <glance@acc.umu.se>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
We currently carry two pressures around for all the samples and plot
info, but the second pressure is reserved for CCR dives as the O2
cylinder pressure.
That's kind of annoying when we *could* use it for regular sidemount
dives as the secondary pressure.
So start prepping for that instead: don't make it "pressure" and
"o2pressure", make it just be an array of two pressure values.
NOTE! This is purely mindless prepwork. It literally just does a
search-and-replace, keeping the exact same semantics, so "pressure[1]"
is still just O2 pressure.
But at some future date, we can now start using it for a second sensor
value for sidemount instead.
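In struct sample, the conversion amounts to:
- pressure_t cylinderpressure;
- pressure_t o2cylinderpressure;
+ pressure_t pressure[2]; /* pressure[1] is still just the O2 pressure, for now */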
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
Datatrak import is called from parse_file() in file.c. This function
reads the full file to be imported into a memblock structure. It's
easier and more secure to parse this buffer instead of the file itself.
These are the necessary changes to the datatrak_import() function's
declaration and call.
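A sketch of the direction of the change (the parameter names and the
remaining arguments are guesses, not the exact signature):
- int datatrak_import(const char *file, struct dive_table *table);
+ int datatrak_import(struct memblock *mem, struct dive_table *table);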
Signed-off-by: Salvador Cuñat <salvador.cunat@gmail.com>
Apparently the refactoring changed these values to be returned directly
in seconds. Not sure why, but luckily we have test cases that discovered
the change.
Signed-off-by: Miika Turkia <miika.turkia@gmail.com>
Move the GUI-independent Seabear import functionality to the Subsurface
core. This will allow Robert to call it directly from the download-from-DC
code.
Tested with an H3 against released and daily versions of Subsurface. The
result differs somewhat, but it actually fixes two bugs:
- Temperature was mis-interpreted previously
- Sample interval for a dive with 1 second interval was parsed
incorrectly
Signed-off-by: Miika Turkia <miika.turkia@gmail.com>
Not using lrint(f) when converting a double/float to int creates
rounding errors.
This error was detected by a TestParse::testParseDM4 failure on
Windows. It was creating rounding inconsistencies on Linux too; see
the change in TestDiveDM4.xml.
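A minimal illustration of the class of error:

    #include <math.h>
    #include <stdio.h>

    int main(void)
    {
        double v = 0.29;    /* stored as 0.28999999999999998... */
        printf("%d %ld\n", (int)(v * 100), lrint(v * 100)); /* prints "28 29" */
        return 0;
    }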
Enable -Wfloat-conversion for gcc versions greater than 4.9.0.
Signed-off-by: Jeremie Guichard <djebrest@gmail.com>
This will parse the date and time information on CSV import if the file
name matches the one used by the APD log viewer (date and time are
available in the file name). Hard-coding the year to 20?? is a bit
unfortunate, but as there are only 2 digits in the year, we have to
invent something. And it would be quite optimistic to assume this will
bite us back any time soon :D
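A hypothetical sketch of the idea (the real APD filename pattern and the
code that parses it may look different):

    #include <QDateTime>
    #include <QFileInfo>
    #include <QRegularExpression>

    static QDateTime dateFromApdName(const QString &fileName)
    {
        // assume a "DDMMYY HHMM" fragment somewhere in the base name
        QRegularExpression re("(\\d{2})(\\d{2})(\\d{2}) (\\d{2})(\\d{2})");
        QRegularExpressionMatch m = re.match(QFileInfo(fileName).baseName());
        if (!m.hasMatch())
            return QDateTime();
        int year = 2000 + m.captured(3).toInt();    // the hard-coded "20" prefix
        return QDateTime(QDate(year, m.captured(2).toInt(), m.captured(1).toInt()),
                         QTime(m.captured(4).toInt(), m.captured(5).toInt()));
    }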
Signed-off-by: Miika Turkia <miika.turkia@gmail.com>
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>