The parser API was very annoying, as a number of to-be-filled
tables were passed in as individual pointers. The goal of this
commit is to collect all these tables in a single struct.
This should make it (more or less) clear what is actually
written into the divelog files.
Moreover, it should now be rather easy to search for
instances where the global logfile is accessed (and it
turns out that there are many!).
The divelog struct does not contain the tables as substructs,
but only collects pointers. The idea is that the "divelog.h"
file can be included without pulling in all the other headers
that describe the numerous tables.
To make it easier to use from C++ parts of the code, the
struct implements a constructor and a destructor. Sadly,
we can't use smart pointers, since the pointers are accessed
from C code. Therefore, the constructor and destructor are
quite complex.
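As a rough sketch (field names approximate, not necessarily the exact
layout of this commit), the struct collects only pointers and a flag:

    /* divelog.h -- sketch only; forward declarations keep the header light */
    #include <stdbool.h>

    struct dive_table;
    struct trip_table;
    struct dive_site_table;
    struct device_table;
    struct filter_preset_table;

    struct divelog {
        struct dive_table *dives;
        struct trip_table *trips;
        struct dive_site_table *sites;
        struct device_table *devices;
        struct filter_preset_table *filter_presets;
        bool autogroup;             /* saved in the divelog, see note below */
    #ifdef __cplusplus
        divelog();                  /* allocates the tables above */
        ~divelog();                 /* frees them; raw pointers, since C code
                                       accesses them directly */
    #endif
    };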
The whole commit is large, but was mostly an automatic
conversion.
One oddity of note: the divelog structure also contains
the "autogroup" flag, since that is saved in the divelog.
This actually fixes a bug: Before, when importing dives
from a different log, the autogroup flag was overwritten.
This was probably not intended and does not happen anymore.
Signed-off-by: Berthold Stoeger <bstoeger@mail.tuwien.ac.at>
The code assumes that prefs.cloud_base_url is non-NULL. Allowing that to
be NULL makes no sense during normal operation of the app. Yet, most of
the tests don't initialize the prefs at all.
Making things worse, if we do correctly initialize the prefs (so as to
reasonably approximate the behavior when running the app), things break
because some of the reference outputs assume that the prefs are unset.
This deserves fixing.
For now, simply make sure that cloud_base_url is set for all the tests
that try to parse files.
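For illustration, a per-test fix can be as small as the following sketch
(the concrete helper, test class and URL are illustrative, not necessarily
what each test uses):

    void TestParse::initTestCase()
    {
        /* give the parser a non-NULL cloud_base_url; the value shown
         * here is only an example */
        prefs.cloud_base_url = strdup("https://cloud.subsurface-divelog.org");
    }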
Additionally, the semantics of how cloud_base_url is saved to disk have
changed, so adjust the test for those prefs accordingly.
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
If we want to keep the parsers from directly modifying global data,
we have to provide a device_table to parse into. This commit adds such
a table to the parser state and the corresponding function parameters. However,
for now this is unused.
Adding new parameters is very painful and this commit shows that
we urgently need a "struct divelog" collecting all those tables!
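Schematically, the affected entry points gain one more argument
(prototypes abbreviated, not verbatim):

    /* before: devices could only end up in global data */
    int parse_file(const char *filename, struct dive_table *table,
                   struct trip_table *trips, struct dive_site_table *sites);

    /* after: the caller also provides the device table to fill */
    int parse_file(const char *filename, struct dive_table *table,
                   struct trip_table *trips, struct dive_site_table *sites,
                   struct device_table *devices);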
Signed-off-by: Berthold Stoeger <bstoeger@mail.tuwien.ac.at>
This is a bit painful: since we don't want to modify the filter
presets when the user imports (as opposed to opens) a log,
we have to provide a table where the parser stores the presets.
Calling the parser is getting quite unwieldy, since many tables
are passed. At some point we should probably introduce a structure
representing a full log book, which collects all the things that
are saved to the log.
Apart from that, this is simply the counterpart to saving to XML.
The interpretation of the string data is performed by core
functions, not the parser itself, to avoid code duplication with
the git parser.
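Very roughly, the split between parser and core looks like this (the
names are illustrative, not the exact functions added):

    /* parser side: collect the raw strings into the caller-provided table */
    void parse_filter_preset(struct parser_state *state,
                             struct filter_preset_table *table);

    /* core side, shared with the git parser: interpret the string data */
    void filter_preset_set_name(struct filter_preset *preset, const char *name);
    void filter_preset_add_constraint(struct filter_preset *preset,
                                      const char *type, const char *string_mode,
                                      const char *range_mode, bool negate,
                                      const char *data);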
Signed-off-by: Berthold Stoeger <bstoeger@mail.tuwien.ac.at>
If we want to make addition of pictures undoable, then create_picture()
must not add directly to the dive. Instead, return the dive to which the
picture should be added and let the caller perform the addition.
This means that the picture-test has to be adapted.
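In sketch form (exact prototypes may differ slightly from the tree):

    /* before: the picture is added to the dive as a side effect */
    void create_picture(struct dive *d, const char *filename,
                        timestamp_t shift_time, bool match_all);

    /* after: return the picture and the dive it belongs to; the caller
     * (and later an undo command) performs the actual addition */
    struct picture *create_picture(const char *filename, timestamp_t shift_time,
                                   bool match_all, struct dive **dive_out);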
Signed-off-by: Berthold Stoeger <bstoeger@mail.tuwien.ac.at>
For consistency with equipment, use our table macros for pictures.
Generally tables (arrays) are preferred over linked lists, because
they allow random access.
This is mostly copy & paste of the equipment code.
Sadly, our table macros are quite messy and need some revamping.
Therefore, the resulting code is likewise somewhat messy.
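For reference, the resulting pattern looks roughly like the equipment
tables (the macro-generated names are approximate):

    /* picture.h (sketch): a growable array instead of a linked list */
    struct picture_table {
        int nr, allocated;
        struct picture *pictures;
    };

    /* functions generated by the table macros, analogous to equipment */
    extern void add_to_picture_table(struct picture_table *, int idx, struct picture);
    extern void remove_from_picture_table(struct picture_table *, int idx);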
Signed-off-by: Berthold Stoeger <bstoeger@mail.tuwien.ac.at>
Move the declarations of the "report_error()" and "set_error_cb()"
functions and the "verbose" variable to errorhelper.h.
Thus, translation units that report errors don't have to include the
big dive.h header file.
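Schematically, the new header contains little more than:

    /* errorhelper.h (sketch) */
    #ifndef ERRORHELPER_H
    #define ERRORHELPER_H

    extern int verbose;
    extern int report_error(const char *fmt, ...);
    extern void set_error_cb(void (*cb)(char *));

    #endif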
Signed-off-by: Berthold Stoeger <bstoeger@mail.tuwien.ac.at>
These functions were spread out over dive.c and divelist.c.
Move them into their own file to make all this a bit less monolithic.
Signed-off-by: Berthold Stoeger <bstoeger@mail.tuwien.ac.at>
Add a dive table to each dive site to keep track of the dives
that have been added to that dive site. Add two functions to add
dives to / remove dives from dive sites.
Since dive sites now contain a dive table, the order of includes
had to be changed: "divesite.h" now includes "dive.h" and not
vice-versa. This caused some include churn.
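A sketch of the idea (the actual struct has many more fields, and the
helper names are approximate):

    struct dive_site {
        uint32_t uuid;
        char *name;
        /* ... */
        struct dive_table dives;    /* dives that belong to this site */
    };

    extern void add_dive_to_dive_site(struct dive *d, struct dive_site *ds);
    extern void unregister_dive_from_dive_site(struct dive *d);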
Signed-off-by: Berthold Stoeger <bstoeger@mail.tuwien.ac.at>
To extend the undo system to dive sites, the importers and downloaders
must not parse directly into the global dive site table. Instead,
pass a dive_site_table argument to parse into.
For now, always pass the global dive_site_table so that this commit
should not cause any functional change.
Signed-off-by: Berthold Stoeger <bstoeger@mail.tuwien.ac.at>
To allow parsing into arbitrary trip_tables, add the corresponding
parameter to the parsing functions and the parser state. Currently,
all callers pass the global trip_table so there should be no change
in functionality. These arguments will be replaced in subsequent commits.
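Conceptually, the parser state now carries the tables it writes into,
and all callers still hand in the globals (a sketch, not the literal code):

    struct parser_state {
        struct dive_table *target_table;  /* parsed dives go here */
        struct trip_table *trips;         /* ...and parsed trips go here */
        /* ... the remaining parsing state ... */
    };

    /* every caller currently passes the globals, so behavior is unchanged */
    parse_file("dives.xml", &dive_table, &trip_table);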
Signed-off-by: Berthold Stoeger <bstoeger@mail.tuwien.ac.at>
Currently, when selecting "Load media files even if time does not
match the dive time", the media are added to *all* selected dives.
Instead add it to the closest dive.
This seems like the less surprising behavior. Of course now if the
user really wants to add a media file to multiple dives, they will
have to do it manually.
To avoid a messy interface, this is solved by moving the iterate-
over-selected-dives loop to the core. Thus, a helper function can
be made local to its translation unit.
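The core helper essentially picks the selected dive whose time is closest
to the medium's timestamp; a simplified sketch (ignoring dive durations):

    static struct dive *nearest_selected_dive(timestamp_t timestamp)
    {
        struct dive *d, *res = NULL;
        timestamp_t offset, min = 0;
        int i;

        for_each_dive (i, d) {
            if (!d->selected)
                continue;
            offset = d->when > timestamp ? d->when - timestamp
                                         : timestamp - d->when;
            if (!res || offset < min) {
                res = d;
                min = offset;
            }
        }
        return res;
    }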
Signed-off-by: Berthold Stoeger <bstoeger@mail.tuwien.ac.at>
Instead of having people treat latitude and longitude as separate
things, just add a 'location_t' data structure that contains both.
Almost all cases want to always act on them together.
This is really just prep-work for adding a few more locations that we
track: I want to add an entry/exit location to each dive (independent of
the dive site) because of how the Garmin Descent gives us the
information (and hopefully, some day, other dive computers too).
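The new type simply bundles the two coordinates, roughly:

    /* units.h (sketch) */
    typedef struct {
        degrees_t lat, lon;
    } location_t;

    static inline bool has_location(const location_t *loc)
    {
        return loc->lat.udeg || loc->lon.udeg;
    }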
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
To enable undo of divelog importing, it is crucial that parse_file()
can parse into arbitrary dive tables.
Signed-off-by: Berthold Stoeger <bstoeger@mail.tuwien.ac.at>
In the last commits, the canonical-to-local filename map was made
independent of the image hashes, and the location of moved images
was determined by filename, not by hash. The hashes are now in principle
unused (except for conversion of old-style local filename lookups).
Therefore, remove the hashes in this commit. This makes addition
of images distinctly faster.
Signed-off-by: Berthold Stoeger <bstoeger@mail.tuwien.ac.at>
The connection from canonical filename to local filename was made via
two maps:
1) canonical filename -> hash
2) hash -> local filename
But the local filename was always looked up via the canonical filename.
Therefore, index the local filenames directly by the canonical filenames.
On startup, convert the old map to the new one.
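Schematically, the lookup goes from two hops to one, with a one-time
conversion at startup (map and function names illustrative):

    static QHash<QString, QByteArray> filenameToHash;   // old map 1: canonical -> hash
    static QHash<QByteArray, QString> hashToLocal;      // old map 2: hash -> local
    static QHash<QString, QString> canonicalToLocal;    // new map: canonical -> local

    // lookup used everywhere: was hashToLocal.value(filenameToHash.value(name)),
    // now a single map access
    static QString localFilename(const QString &canonical)
    {
        return canonicalToLocal.value(canonical);
    }

    // one-time conversion of the old maps on startup
    static void convertLocalFilenameMap()
    {
        for (auto it = filenameToHash.cbegin(); it != filenameToHash.cend(); ++it)
            canonicalToLocal.insert(it.key(), hashToLocal.value(it.value()));
    }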
Signed-off-by: Berthold Stoeger <bstoeger@mail.tuwien.ac.at>
The hash field in the picture structure was effectively non-operational.
It was set on loading, but never actually changed. The authoritative
hash comes from the filename->hash map.
Therefore, make this explicit by removing the hash field from the
picture structure.
Instead of filling the picture structure on loading, add the
hash directly to the filename->hash map. This is done in the
register_hash() function, which does not overwrite old entries.
I.e., the local hash takes priority over the one from the save file. This
policy might be refined in the future.
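The intended behavior of register_hash() is essentially "first writer
wins"; a sketch with an illustrative map name and locking omitted:

    static QHash<QString, QByteArray> hashByFilename;   // filename -> hash

    void register_hash(const QString &filename, const QByteArray &hash)
    {
        if (hash.isEmpty())
            return;
        // never overwrite: an already-known (locally computed) hash wins
        if (!hashByFilename.contains(filename))
            hashByFilename.insert(filename, hash);
    }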
Signed-off-by: Berthold Stoeger <bstoeger@mail.tuwien.ac.at>
learnHash() was always called in conjunction with add_hash(). The
pattern was that a local filename and a hash were connected in
the hash-to-filename and the filename-to-hash maps. Then, the
original picture filename or URL was registered in the filename-to-hash
map.
This commit changes learnHash() to take three parameters (original-filename,
local-filename and hash) and do all of the above. The new code is
simpler because no dummy picture struct has to be generated in
DiveListView::loadImageFromURL().
The tests were extended to check for all hash<->filename associations.
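In sketch form, the combined operation of the new learnHash() looks like
this (map and parameter names are illustrative):

    static QHash<QString, QByteArray> hashByFilename;      // filename -> hash
    static QHash<QByteArray, QString> localFilenameByHash; // hash -> local file

    void learnHash(const QString &originalName, const QString &localName,
                   const QByteArray &hash)
    {
        if (hash.isNull())
            return;
        localFilenameByHash.insert(hash, localName);
        hashByFilename.insert(localName, hash);
        hashByFilename.insert(originalName, hash);   // original filename or URL
    }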
Signed-off-by: Berthold Stoeger <bstoeger@mail.tuwien.ac.at>
learnHash() is called either on a local picture structure
[DiveListView::loadImageFromURL()] or on a cloned picture structure
[ImageDownloader::saveImage()]. In neither case is the picture structure
passed to the frontend. Therefore, storing the new hash in the
picture struct is not necessary.
Signed-off-by: Berthold Stoeger <bstoeger@mail.tuwien.ac.at>
Add one more picture to the existing test.
This new picture is a JPEG with data after the JFIF EOI tag.
Suggested-by: Miika Turkia <miika.turkia@gmail.com>
Signed-off-by: Stefan Fuchs <sfuchs@gmx.de>
Update the tests with a (compile-time) option SUBSURFACE_TEST_DATA
pointing to the test data base path. It is needed for cross-compilation cases.
SUBSURFACE_TEST_DATA is set to SUBSURFACE_SOURCE by default,
or can be configured via the cmake option -DSUBSURFACE_TEST_DATA="...".
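In the test sources the define is then used roughly like this (the
concrete file and call are only an illustration):

    // locate test input relative to the configured base path
    QCOMPARE(parse_file(SUBSURFACE_TEST_DATA "/dives/test40.xml"), 0);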
Signed-off-by: Jeremie Guichard <djebrest@gmail.com>