The parser API was very annoying, as a number of tables
to be filled were passed in as pointers. The goal of this
commit is to collect all these tables in a single struct.
This should make it (more or less) clear what is actually
written into the divelog files.
Moreover, it should now be rather easy to search for
instances where the global logfile is accessed (and it
turns out that there are many!).
The divelog struct does not contain the tables as substructs,
but only collects pointers. The idea is that the "divelog.h"
file can be included without all the other files describing
the numerous tables.
To make it easier to use from C++ parts of the code, the
struct implements a constructor and a destructor. Sadly,
we can't use smart pointers, since the pointers are accessed
from C code. Therefore, the constructor and destructor are
quite complex.
The whole commit is large, but was mostly an automatic
conversion.
One oddity of note: the divelog structure also contains
the "autogroup" flag, since that is saved in the divelog.
This actually fixes a bug: Before, when importing dives
from a different log, the autogroup flag was overwritten.
This was probably not intended and does not happen anymore.
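To illustrate the idea, a rough sketch of what such a header can look
like (the member and type names here are illustrative and not necessarily
the exact ones used in the code):

    #include <stdbool.h>

    /* only forward declarations are needed, because the struct stores
       pointers, not the tables themselves */
    struct dive_table;
    struct trip_table;
    struct dive_site_table;
    struct device_table;
    struct filter_preset_table;

    struct divelog {
        struct dive_table *dives;
        struct trip_table *trips;
        struct dive_site_table *sites;
        struct device_table *devices;
        struct filter_preset_table *filter_presets;
        bool autogroup;
    #ifdef __cplusplus
        divelog();    /* allocates the tables above */
        ~divelog();   /* frees them again */
    #endif
    };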
Signed-off-by: Berthold Stoeger <bstoeger@mail.tuwien.ac.at>
If we want to keep the parsers from directly modifying global data,
we have to provide a device_table to parse into. This adds such
a table to the parser state and the corresponding function parameters. However,
for now this is unused.
Adding new parameters is very painful and this commit shows that
we urgently need a "struct divelog" collecting all those tables!
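For illustration, the kind of signature change involved (the function
name and the exact parameter list here are made up; the real entry
points already take several other tables):

    struct dive_table;
    struct device_table;

    /* every parsing entry point grows yet another output parameter;
       for now, nothing is actually parsed into it */
    int parse_buffer(const char *buffer, int size,
                     struct dive_table *dives,
                     struct device_table *devices /* new */);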
Signed-off-by: Berthold Stoeger <bstoeger@mail.tuwien.ac.at>
This is a bit painful: since we don't want to modify the filter
presets when the user imports (as opposed to opens) a log,
we have to provide a table where the parser stores the presets.
Calling the parser is getting quite unwieldy, since many tables
are passed. At some point, we should probably introduce a structure
representing a full logbook, which collects all the things that
are saved to the log.
Apart from that, this is simply the counterpart to saving to XML.
The interpretation of the string data is performed by core
functions, not the parser itself, to avoid code duplication with
the git parser.
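To illustrate the difference at the call site (all names below are made
up for the example; the real parser entry points take more parameters):

    struct filter_preset_table;                       /* opaque in this sketch */
    extern struct filter_preset_table filter_presets; /* the user's presets */

    int parse_log(const char *filename, struct filter_preset_table *presets);
    struct filter_preset_table *alloc_filter_preset_table(void);

    void open_vs_import(void)
    {
        /* opening a log: presets go straight into the user's table */
        parse_log("divelog.ssrf", &filter_presets);

        /* importing a log: parse into a separate table that can be merged
           or simply discarded, leaving the user's presets untouched */
        struct filter_preset_table *imported = alloc_filter_preset_table();
        parse_log("import.ssrf", imported);
    }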
Signed-off-by: Berthold Stoeger <bstoeger@mail.tuwien.ac.at>
Instead of using the Subsurface-divelog user on GitHub, we now use an org that
was generously donated to us.
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
These functions were spread out over dive.c and divelist.c.
Move them into their own file to make all this a bit less monolithic.
Signed-off-by: Berthold Stoeger <bstoeger@mail.tuwien.ac.at>
Add a dive table to each dive site to keep track of dives
that have been added to a dive site. Add two functions to add
dives to / remove dives from dive sites.
Since dive sites now contain a dive table, the order of includes
had to be changed: "divesite.h" now includes "dive.h" and not
vice-versa. This caused some include churn.
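In other words, the shape of the change is roughly the following (the
field layout and function names are close to, but not guaranteed to be,
the actual ones):

    struct dive;

    struct dive_table {
        int nr, allocated;
        struct dive **dives;
    };

    struct dive_site {
        /* ...name, location, notes, ... */
        struct dive_table dives;   /* the dives at this site */
    };

    /* register a dive with a dive site / remove it again */
    void add_dive_to_dive_site(struct dive *d, struct dive_site *ds);
    void unregister_dive_from_dive_site(struct dive *d);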
Signed-off-by: Berthold Stoeger <bstoeger@mail.tuwien.ac.at>
Move the declarations of these functions to "file.h" and "parse.h"
according to the translation unit they are defined in. Thus, not
all users of "dive.h" have to suck in "sqlite3.h".
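To illustrate why this matters: a declaration along the following
(hypothetical) lines mentions the sqlite3 type, so whatever header it
lives in forces the sqlite3 include onto everyone who includes it:

    #include <sqlite3.h>

    struct dive_table;

    /* hypothetical example of such a declaration; previously it would sit
       in "dive.h", now it lives in "parse.h" next to the other parser
       entry points */
    int parse_db_buffer(sqlite3 *handle, const char *url,
                        const char *buffer, int size,
                        struct dive_table *table);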
Signed-off-by: Berthold Stoeger <bstoeger@mail.tuwien.ac.at>
To extend the undo system to dive sites, the importers and downloaders
must not parse directly into the global dive site table. Instead,
pass a dive_site_table argument to parse into.
For now, always pass the global dive_site_table so that this commit
should not cause any functional change.
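Schematically, and with illustrative names, the change looks like this:

    struct dive_site_table;
    extern struct dive_site_table dive_site_table;  /* the global table */

    /* the parsing entry points gain a parameter selecting the target table */
    int parse_log_buffer(const char *buffer, int size,
                         struct dive_site_table *sites /* new */);

    void caller(const char *buf, int size)
    {
        /* for now, every caller passes the global table, so nothing changes */
        parse_log_buffer(buf, size, &dive_site_table);
    }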
Signed-off-by: Berthold Stoeger <bstoeger@mail.tuwien.ac.at>
To allow parsing into arbitrary trip_tables, add the corresponding
parameter to the parsing functions and the parser state. Currently,
all callers pass the global trip_table so there should be no change
in functionality. These arguments will be replaced in subsequent commits.
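The parser-state side of this can be sketched as follows (member names
are illustrative):

    struct trip_table;

    struct parser_state {
        /* ...existing members elided... */
        struct trip_table *trips;  /* table that newly parsed trips go into */
    };

    /* currently, every caller initializes this with the global trip_table,
       so behavior is unchanged */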
Signed-off-by: Berthold Stoeger <bstoeger@mail.tuwien.ac.at>
This requires the user to manually copy the large test file into the
dives directory (because I really don't want to add this to the repo);
instructions on how to do that are displayed if the file is missing.
Next, it uses the git version of that same file (but prefetches it to
try to remove the network speed from what is being measured).
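For reference, the core of such a benchmark can be sketched like this
(Subsurface's tests are Qt-based; the class name, slot name, file path
and parsing helper below are placeholders, not the actual ones):

    #include <QtTest>

    // placeholder for the actual parsing code, which takes more arguments
    extern void parse_large_log(const char *filename);

    class TestParsePerformance : public QObject {
        Q_OBJECT
    private slots:
        void parseLargeXmlLog();
    };

    void TestParsePerformance::parseLargeXmlLog()
    {
        // QBENCHMARK repeats the body and reports the timing, so only the
        // parsing itself is measured, not any setup around it
        QBENCHMARK {
            parse_large_log("dives/large-test-log.xml");  // placeholder path
        }
    }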
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>