Commit graph

30 commits

Author SHA1 Message Date
Dirk Hohndel
72bdd506f0 tests: initialize our prefs to the default values
Otherwise we end up with nonsensical values which lead to a division by zero,
which in turn leads to different results depending on the platform.

Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
2021-07-23 11:30:17 -07:00
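
The fix amounts to resetting the global preferences at the start of the test
binary. A minimal sketch of what that can look like in a QTest fixture,
assuming Subsurface's global prefs/default_prefs objects and the copy_prefs()
helper from core/pref.h (names taken on trust from the tree, not from this log):

    // Sketch only: reset the global preferences before any test runs.
    // `prefs`, `default_prefs` and copy_prefs() are assumed to exist
    // roughly as declared in core/pref.h.
    #include <QtTest>
    #include "core/pref.h"

    class TestWithPrefs : public QObject {
        Q_OBJECT
    private slots:
        void initTestCase()
        {
            // Start from the documented defaults instead of a
            // zero-initialized struct (which caused the division by zero).
            copy_prefs(&default_prefs, &prefs);
        }
    };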
Dirk Hohndel
fe074ccad1 cloudstorage: adapt tests
The code assumes that prefs.cloud_base_url is non-NULL. Allowing it to
be NULL makes no sense during normal operation of the app. Yet most of
the tests don't initialize the prefs at all.

Making things worse, if we do correctly initialize the prefs (so as to
reasonably approximate the behavior when running the app), things break
because some of the reference outputs assume that the prefs are unset.
This deserves fixing.

For now, simply make sure that cloud_base_url is set for all the tests
that try to parse files.

Additionally, the semantics of how cloud_base_url is saved to disk have
changed, so adjust the test for those prefs accordingly.

Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
2021-04-19 12:51:01 -07:00
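
For the parsing tests the immediate fix is to give cloud_base_url a sane value
before any file is read. A hedged sketch, again assuming the global
prefs/default_prefs structs with a C-string cloud_base_url field:

    // Sketch only: make sure prefs.cloud_base_url is non-NULL before parsing.
    #include <cstring>
    #include "core/pref.h"

    static void ensure_cloud_base_url()
    {
        if (!prefs.cloud_base_url && default_prefs.cloud_base_url)
            prefs.cloud_base_url = strdup(default_prefs.cloud_base_url);
    }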
Berthold Stoeger
39a4090c0a devices: add devices in Command::importTable()
Add a device_table parameter to Command::importTable() and
add_imported_dives(). The content of this table will be added
to the global device list (and removed again on undo).

This is currently a no-op, as the parser doesn't yet fill
out the device table, but adds devices directly to the global
device table.

Signed-off-by: Berthold Stoeger <bstoeger@mail.tuwien.ac.at>
2020-10-24 09:51:37 -07:00
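
The shape of the change is roughly the following; these are hypothetical,
simplified signatures (parameter order and exact types are assumptions, not
taken from the diff):

    // Hypothetical sketch: the import path now carries a device_table
    // alongside the dive, trip and dive-site tables.
    struct dive_table;
    struct trip_table;
    struct dive_site_table;
    struct device_table;

    void add_imported_dives(struct dive_table *dives, struct trip_table *trips,
                            struct dive_site_table *sites,
                            struct device_table *devices, int flags);

    namespace Command {
        void importTable(struct dive_table *dives, struct trip_table *trips,
                         struct dive_site_table *sites,
                         struct device_table *devices, int flags);
    }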
Berthold Stoeger
a261466594 parser: add device_table to parser state
If we want to keep the parsers from directly modifying global data,
we have to provide a device_table to parse into. This adds such
a state and the corresponding function parameters. However,
for now this is unused.

Adding new parameters is very painful and this commit shows that
we urgently need a "struct divelog" collecting all those tables!

Signed-off-by: Berthold Stoeger <bstoeger@mail.tuwien.ac.at>
2020-10-24 09:51:37 -07:00
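
A minimal sketch of the idea, with illustrative member names: the parser state
gains a pointer to the device table it should fill, and the callers keep
passing the global table for now:

    // Sketch only: parser state carrying the tables to parse into
    // instead of writing to globals. Member names are illustrative.
    struct dive_table;
    struct device_table;

    struct parser_state {
        struct dive_table *target_table;   // dives parsed so far go here
        struct device_table *devices;      // new: devices parsed so far go here
        // ... remaining parser state ...
    };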
Berthold Stoeger
a7bbb6c1cc filter: remove filter_preset_table_t
We used a typedef "filter_preset_table_t" for the filter preset table,
because it is a "std::vector<filter_preset>". However, that is in
contrast to all the other global tables (dives, trips, sites) that we
have.

Therefore, turn this into a standard struct, which simply inherits
from "std::vector<filter_preset>". Note that while inheriting from
std::vector<> is generally not recommended, it is not a problem
here, because we don't modify it in any way, shape or form.

Signed-off-by: Berthold Stoeger <bstoeger@mail.tuwien.ac.at>
2020-10-17 09:04:20 -07:00
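
The change itself is tiny; a sketch of the before and after (the stand-in
filter_preset here only keeps the snippet self-contained, the real type lives
in Subsurface's core code):

    #include <string>
    #include <vector>

    // Stand-in for the real filter_preset type.
    struct filter_preset {
        std::string name;
        // ... filter data ...
    };

    // Before: typedef std::vector<filter_preset> filter_preset_table_t;
    // After: a named struct, in line with the other global tables.
    // Inheriting from std::vector is usually discouraged (no virtual
    // destructor), but is harmless here: the table adds no state and is
    // never deleted through a base-class pointer.
    struct filter_preset_table : public std::vector<filter_preset> {
    };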
Berthold Stoeger
41cf83583d filter: load filter presets from XML files
This is a bit painful: since we don't want to modify the filter
presets when the user imports (as opposed to opens) a log,
we have to provide a table where the parser stores the presets.
Calling the parser is getting quite unwieldy, since many tables
are passed. We probably should introduce a structure representing
a full log-book at some point, which collects all the things that
are saved to the log.

Apart from that, this is simply the counterpart to saving to XML.
The interpretation of the string data is performed by core
functions, not the parser itself to avoid code duplication with
the git parser.

Signed-off-by: Berthold Stoeger <bstoeger@mail.tuwien.ac.at>
2020-09-29 16:13:03 -07:00
Berthold Stoeger
95284c026e cleanup: move dive_table from dive.h to divelist.h
This allows us to decouple dive.h and divelist.h, a small step in
include disentangling.

Signed-off-by: Berthold Stoeger <bstoeger@mail.tuwien.ac.at>
2020-05-01 09:42:31 -07:00
Dirk Hohndel
e1cd055111 code cleanup: add empty table structures
It seemed to make sense to combine all three types in one commit.

Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
2020-01-10 02:37:03 +09:00
Berthold Stoeger
7f4d9db962 Cleanup: move trip-related functions into own translation unit
These functions were spread out over dive.c and divelist.c.
Move them into their own file to make all this a bit less monolithic.

Signed-off-by: Berthold Stoeger <bstoeger@mail.tuwien.ac.at>
2019-06-19 13:11:10 -07:00
Berthold Stoeger
858d3e2eed Dive site: fix merging tests
The handling of dive site merging changed and therefore the tests
have to be adapted.

1) Dive sites are recognized as identical based on their name.
   Therefore, give the dive sites that should be merged the same name.
2) The dive site id of the first imported dive is kept. Thus,
   merge and reverse merge produce two different output files.
   Create a second file reflecting that fact.

Signed-off-by: Berthold Stoeger <bstoeger@mail.tuwien.ac.at>
2019-04-12 18:19:07 +03:00
Berthold Stoeger
c22fd9f4fd Dive sites: prepare for dive site ref-counting
Add a dive table to each dive site to keep track of dives
that have been added to a dive site. Add two functions to add
dives to / remove dives from dive sites.

Since dive sites now contain a dive table, the order of includes
had to be changed: "divesite.h" now includes "dive.h" and not
vice-versa. This caused some include churn.

Signed-off-by: Berthold Stoeger <bstoeger@mail.tuwien.ac.at>
2019-04-12 18:19:07 +03:00
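
A hedged sketch of the resulting data structures and helpers, with names
following the commit's description (the real declarations in divesite.h may
differ):

    // Illustrative declarations only.
    struct dive;

    struct dive_table {
        int nr, allocated;
        struct dive **dives;
    };

    struct dive_site {
        // ... name, coordinates, notes ...
        struct dive_table dives;   // dives registered at this site
    };

    // Register / unregister a dive at a site, keeping ds->dives in sync.
    void add_dive_to_dive_site(struct dive *d, struct dive_site *ds);
    void unregister_dive_from_dive_site(struct dive *d);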
Berthold Stoeger
82af1b2377 Undo: make undo-system dive site-aware
As opposed to dive trips, dive sites were always directly added
to the global table, even on import. Instead, parse the divesites
into a distinct table and merge them on import.

Currently, this does not do any merging of dive sites, i.e. dive
sites are considered as either equal or different. Nevertheless,
merging of data should be rather easy to implement and simply
follow the code of the dive merging.

Signed-off-by: Berthold Stoeger <bstoeger@mail.tuwien.ac.at>
2019-04-12 18:19:07 +03:00
Berthold Stoeger
37146c5742 Parser: parse into custom dive site table
To extend the undo system to dive sites, the importers and downloaders
must not parse directly into the global dive site table. Instead,
pass a dive_site_table argument to parse into.

For now, always pass the global dive_site_table so that this commit
should not cause any functional change.

Signed-off-by: Berthold Stoeger <bstoeger@mail.tuwien.ac.at>
2019-04-12 18:19:07 +03:00
Berthold Stoeger
891fcbf520 Import: control process_imported_dives() by flags
process_imported_dives() takes four boolean parameters. Replace these
by flags. This makes the function calls much more descriptive. Moreover,
it becomes easier to add or remove flags.

Signed-off-by: Berthold Stoeger <bstoeger@mail.tuwien.ac.at>
2019-01-19 13:48:17 -08:00
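
A hedged sketch of the flag-based interface; the flag names are modeled on
this series of commits, and the real constants in the core code may be named
differently:

    // Illustrative flags replacing four boolean parameters.
    enum import_flags {
        IMPORT_PREFER_IMPORTED = 1 << 0,
        IMPORT_IS_DOWNLOADED   = 1 << 1,
        IMPORT_MERGE_ALL_TRIPS = 1 << 2,
        IMPORT_ADD_TO_NEW_TRIP = 1 << 3,
    };

    struct dive_table;
    struct trip_table;

    // Hypothetical simplified signature: one flags word instead of four bools.
    void process_imported_dives(struct dive_table *import_table,
                                struct trip_table *import_trip_table,
                                int flags);

    // A call site now documents itself, e.g.:
    //   process_imported_dives(&table, &trips,
    //                          IMPORT_IS_DOWNLOADED | IMPORT_MERGE_ALL_TRIPS);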
Berthold Stoeger
1cd0863cca Import: add add_to_new_trip flag to process_imported_dives()
If this flag is set, dives that are not assigned to a trip will
be assigned to a new trip. This flag is set if the user checked
"add to new trip" in the download dialog of the desktop version.

Currently this is a no-op as the dives will already have been
added to a new trip by the downloading code. This will be removed
in a subsequent commit.

Signed-off-by: Berthold Stoeger <bstoeger@mail.tuwien.ac.at>
2019-01-19 13:48:17 -08:00
Berthold Stoeger
0249e12589 Import: split process_imported_dives() function
Split the process_imported_dives() function in two:
1) process_imported_dives() processes the dives and generates
   a list of dives and trips to be added and removed.
2) add_imported_dives() calls process_imported_dives() and
   does the actual removal / addition of dives and trips.

The goal is to split preparation and actual work, to
make dive import undo-able.

The code adds extra checks to never merge into the same
dive twice, as this would lead to a double-free() bug.
This should in principle never happen, as dives that
compare equal according to is_same_dive() are merged
in the imported-dives list, but perhaps in some pathological
corner cases is_same_dive() turns out to be non-transitive.

Signed-off-by: Berthold Stoeger <bstoeger@mail.tuwien.ac.at>
2019-01-09 20:58:04 -08:00
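
The intended usage pattern, sketched with simplified hypothetical signatures
(the real functions take additional table parameters):

    struct dive_table;
    struct trip_table;

    // Phase 1: pure computation. Work out which dives and trips would be
    // added to and removed from the log, without touching global state.
    void process_imported_dives(struct dive_table *import_table,
                                struct trip_table *import_trip_table,
                                int flags,
                                /* out */ struct dive_table *dives_to_add,
                                /* out */ struct dive_table *dives_to_remove,
                                /* out */ struct trip_table *trips_to_add);

    // Phase 2: apply the result. Only this step mutates the global log,
    // which is what makes the import undoable.
    void add_imported_dives(struct dive_table *import_table,
                            struct trip_table *import_trip_table,
                            int flags);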
Berthold Stoeger
0dfc59f38c Import: add merge_all_trips parameter to process_imported_dives()
When importing log-files we generally want to merge trips. But
when downloading and the user chose "generate new trip", that
new trip should not be merged into existing trips.

Therefore, add a "merge_all_trips" parameter to process_imported_dives().
If false, only autogenerated trips [via autogroup] will be merged.
In the future we might want to let the user choose if trips
should be merged when importing log-files.

Signed-off-by: Berthold Stoeger <bstoeger@mail.tuwien.ac.at>
2019-01-09 20:58:04 -08:00
Berthold Stoeger
1593f2ebad Import: merge dives trip-wise
The old way of merging log-files was not well defined: Trips
were recognized as the same if and only if their first dives
started at the same instant. Later dives did not matter.

Change this to merge trips if they are overlapping.
Moreover, on parsing and download, generate trips in a separate
trip-table.

This will be fundamental for undo of dive-import: Firstly, we
don't want to mix trips of imported and not-yet imported dives.
Secondly, by merging trip-wise, we can autogroup the dives
in the import-data to trips and merge these at once. This will
simplify the code that decides to which trip dives should be
autogrouped.

Signed-off-by: Berthold Stoeger <bstoeger@mail.tuwien.ac.at>
2019-01-09 20:58:04 -08:00
Berthold Stoeger
7e33369dc8 Parser: add trip_table parameter to parsing functions
To allow parsing into arbitrary trip_tables, add the corresponding
parameter to the parsing functions and the parser state. Currently,
all callers pass the global trip_table so there should be no change
in functionality. These arguments will be replaced in subsequent commits.

Signed-off-by: Berthold Stoeger <bstoeger@mail.tuwien.ac.at>
2019-01-09 20:58:04 -08:00
Berthold Stoeger
6dc1dcaea5 Import: pass "downloaded" parameter to process_imported_dives()
process_imported_dives() is more efficient for downloaded than for
imported (from a file) dives, because it checks only the divecomputer
of the first dive.

This condition is checked via the "downloaded" flag of the first
dive. Instead, pass an argument to process_imported_dives().

Signed-off-by: Berthold Stoeger <bstoeger@mail.tuwien.ac.at>
2018-10-06 19:47:06 -07:00
Berthold Stoeger
810903bdb9 Import: pass a dive table to process_imported_dives()
Dives were directly imported into the global dive table and then
merged in process_imported_dives(). Make this interface more flexible,
by passing an independent dive table.

The dive table of the to-be-imported dives will be sorted and merged.
Then each dive is inserted one by one into the global
dive table.

This actually introduces (at least) two functional changes:
1) If a new dive spans two old dives, it will only be merged to the
   first dive. But this seems like a pathological case, which is of
   dubious value anyway.
2) Dives unrelated to the import will not be merged. The old code
   would happily merge dives that were not even close to the
   newly imported dives. A surprising behavior.

Signed-off-by: Berthold Stoeger <bstoeger@mail.tuwien.ac.at>
2018-10-06 19:47:06 -07:00
Berthold Stoeger
35b8a4f404 Core: split process_dives() in post-import and post-load versions
process_dives() is used to post-process the dive table after loading
or importing. The first parameter states whether this was after
load or import.

Especially in the light of undo, load and import are fundamentally
different things. Notably, the latter should be undo-able, whereas
the former is not. Therefore, as a first step to make import undo-able,
split the function in two versions and remove the first parameter.

It turns out that the load-version is very light. It only sets the
DC nicknames and sorts the dive-table. There seems to be no reason
to merge dives.

Signed-off-by: Berthold Stoeger <bstoeger@mail.tuwien.ac.at>
2018-09-23 11:50:53 -07:00
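
In other words, the single entry point becomes two; a hypothetical, heavily
simplified sketch of the split:

    // After opening a log: set DC nicknames and sort the dive table only.
    void process_loaded_dives(void);

    // After an import or download: merge the new dives into the log.
    // This is the path that is meant to become undoable.
    void process_imported_dives(bool prefer_imported);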
Berthold Stoeger
d815e0c947 Parse: pass dive_table argument to parse_file()
To enable undo of divelog-importing it is crucial that parse_file()
can parse into arbitrary dive tables.

Signed-off-by: Berthold Stoeger <bstoeger@mail.tuwien.ac.at>
2018-08-23 10:17:12 -07:00
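
Schematically, with a hypothetical simplified signature (the real one grew
further table parameters in the later commits above):

    // Before: parse_file(const char *filename) filled the global dive table.
    // After (sketch): the caller chooses the destination table.
    struct dive_table;

    int parse_file(const char *filename, struct dive_table *table);

    // An importer can now parse into a scratch table and decide later
    // whether and how to merge it into the global log (undoably).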
Dirk Hohndel
afe7e847d6 Whitespace cleanup tests
Again, entirely script based.

Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
2018-07-26 16:32:51 +03:00
Dirk Hohndel
ee1bf18189 Add SPDX header to tests
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
2017-04-29 13:32:55 -07:00
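
For reference, Subsurface uses the short SPDX form as the first line of its
source files; for the test sources that is a one-line comment of the form:

    // SPDX-License-Identifier: GPL-2.0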
Jeremie Guichard
5caa9b23fe Use QTest cleanup method for proper test shutdown
In case of a QCOMPARE failure, the code following the comparison
is not executed. This results in application state not being
properly restored and often gives several test failures,
when only one test really fails.
Using the QTest cleanup() method allows restoring the proper state
before the next test is executed.

Signed-off-by: Jeremie Guichard <djebrest@gmail.com>
2017-03-04 12:03:33 -08:00
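
The mechanism is standard QTest: a private slot named cleanup() is invoked
after every test function, whether it passed or failed, so state restoration
placed there always runs. A minimal self-contained sketch:

    #include <QtTest>

    class TestExample : public QObject {
        Q_OBJECT
    private slots:
        void cleanup()           // runs after *every* test function below
        {
            // restore global state here (clear tables, reset prefs, ...)
        }
        void testSomething()
        {
            QCOMPARE(1 + 1, 2);  // even if this fails, cleanup() still runs
        }
    };

    QTEST_GUILESS_MAIN(TestExample)
    #include "testexample.moc"   // needed when the class lives in a .cpp file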
Jeremie Guichard
7329629b6e Use proper order in QCOMPARE arguments
The expected value is the second argument of QCOMPARE;
having the arguments in the right order avoids confusion
when looking at the error message in case of a test failure.

Signed-off-by: Jeremie Guichard <djebrest@gmail.com>
2017-03-04 12:03:33 -08:00
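
For reference, QCOMPARE(actual, expected) prints the first argument as
"Actual" and the second as "Expected" in its failure message, so the computed
value belongs first. A small self-contained example:

    #include <QtTest>

    class TestCompareOrder : public QObject {
        Q_OBJECT
    private:
        int computeDepth() const { return 30000; }  // stand-in for the code under test
    private slots:
        void testOrder()
        {
            // Actual (computed) value first, expected (reference) value second:
            QCOMPARE(computeDepth(), 30000);
            // Swapping the arguments mislabels "Actual" and "Expected" in the
            // failure output, which is what this commit fixes across the tests.
        }
    };

    QTEST_GUILESS_MAIN(TestCompareOrder)
    #include "testcompareorder.moc"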
Jeremie Guichard
f28aab7bd9 Fix trailing '\r' test failure on Windows
The Windows implementation of fwrite changes \n to \r\n
for files opened in text mode.
This caused failures in TestMerge and TestParse when
comparing written files against reference data.

Signed-off-by: Jeremie Guichard <djebrest@gmail.com>
2017-02-25 09:24:23 -08:00
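
The underlying C behavior, for reference: streams opened in text mode on
Windows translate '\n' to "\r\n" on output, while binary mode writes bytes
verbatim, so either the files have to be opened in binary mode or the
comparison has to tolerate the translation. A small standalone illustration:

    #include <cstdio>

    int main()
    {
        // Text mode: on Windows each '\n' is written as "\r\n".
        if (FILE *t = std::fopen("text.txt", "w")) {
            std::fputs("line\n", t);
            std::fclose(t);
        }

        // Binary mode: bytes are written exactly as given on every platform.
        if (FILE *b = std::fopen("binary.txt", "wb")) {
            std::fputs("line\n", b);
            std::fclose(b);
        }

        // On Windows the first file is 6 bytes, the second 5; on POSIX both are 5.
        return 0;
    }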
Jeremie Guichard
1e2580c3fd Use SUBSURFACE_TEST_DATA definition to point to test data dir
Update tests with a (compile-time) option SUBSURFACE_TEST_DATA
pointing to the test data base path. It is needed for cross-compilation cases.
SUBSURFACE_TEST_DATA is set to SUBSURFACE_SOURCE by default,
or is configurable via the cmake option -DSUBSURFACE_TEST_DATA="...".

Signed-off-by: Jeremie Guichard <djebrest@gmail.com>
2017-02-24 01:10:22 -08:00
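
In the test code the macro then serves as a plain string prefix when building
paths to reference files, roughly like this (the helper and the file name
shown here are illustrative):

    #include <QString>

    // SUBSURFACE_TEST_DATA is a compile-time string macro provided by cmake,
    // e.g. via -DSUBSURFACE_TEST_DATA="/path/to/subsurface".
    static QString testFile(const char *relative)
    {
        return QString(SUBSURFACE_TEST_DATA) + "/dives/" + relative;
    }

    // usage (file name made up): QFile f(testFile("example.xml"));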
Dirk Hohndel
3fef6ec31d Simple test case for merging dives
We do some merging in a couple of the other tests as well, but the idea
is to have specific test cases that exercise our merge logic.

This one starts simple. Merge a dive with some valid info with a second
one that has less data filled in. Then try it in both possible orders.

It shows a few potential problems.

Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
2017-02-21 18:22:56 -08:00