The old test was broken in many ways and kept failing for a number of
reasons. Some of them were addressed in the previous commits (the
missing HEAD ref being the main one); another was that the tests kept
stepping on each other's toes - as, potentially, did random users or
reviewers using the 'universal' test account.
This uses a random one of ten dedicated test accounts, and on top of
that uses a random branch name (instead of the fixed email address
associated with the account).
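A minimal sketch of how such a selection could look, purely for
illustration (the account naming, the ten-account pool, and the branch
name format are assumptions, not the actual test code), using Qt's
QRandomGenerator and QUuid:

    #include <QRandomGenerator>
    #include <QString>
    #include <QUuid>

    // Pick one of ten dedicated test accounts at random so parallel
    // test runs are unlikely to collide (account names are made up).
    static QString pickTestAccount()
    {
        int idx = QRandomGenerator::global()->bounded(10);
        return QStringLiteral("test-account-%1@example.com").arg(idx);
    }

    // Use a throwaway branch name instead of the one derived from the
    // account's email address.
    static QString pickTestBranch()
    {
        return QStringLiteral("test-branch-") +
               QUuid::createUuid().toString(QUuid::WithoutBraces);
    }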
This also rewrites several of the tests dealing with offline changes to
correctly model going offline (but keeps one that directly writes to the
local cache).
Finally, this change also tries to do a much better job of cleaning up
after itself and not leaving data behind that the next run could
stumble over.
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
In case of a QCOMPARE failure, the code following the comparison is
not executed. This results in the application state not being properly
restored and often causes several test failures when only one test
really fails.
Using QTest's cleanup() method allows restoring the proper state before
the next test is executed.
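As a minimal sketch of that pattern (class and member names here are
illustrative, not the actual test code): QTest calls the private
cleanup() slot after every test function, even when a failed
QCOMPARE/QVERIFY aborts the test body early, so state restoration
placed there always runs.

    #include <QtTest>

    class TestExample : public QObject {
        Q_OBJECT
    private slots:
        void cleanup();       // run by QTest after every test function
        void testSomething();
    private:
        bool offline = false; // illustrative piece of shared state
    };

    void TestExample::cleanup()
    {
        // restore state even if the test body bailed out early at a
        // failed comparison
        offline = false;
    }

    void TestExample::testSomething()
    {
        offline = true;
        // if this comparison failed, the function would return here
        // and the restore below would be skipped - cleanup() saves us
        QCOMPARE(2 + 2, 4);
        offline = false;
    }

    QTEST_GUILESS_MAIN(TestExample)
    #include "testexample.moc" // assumes the file is testexample.cpp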
Signed-off-by: Jeremie Guichard <djebrest@gmail.com>
The method originally called testSetup is more a precondition for test
execution than an actual test. QTest recommends using initTestCase()
for that purpose.
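For reference, initTestCase() is one of QTest's special private slots:
it runs once before the first test function (and cleanupTestCase()
once after the last one), so it is the natural home for one-time
preconditions. A minimal sketch with made-up names:

    #include <QtTest>

    class TestExample : public QObject {
        Q_OBJECT
    private slots:
        // run once before the first test function; put preconditions
        // here instead of in a misnamed testSetup() "test"
        void initTestCase() { qputenv("EXAMPLE_TEST_MODE", "1"); }
        void testFirstThing() { QVERIFY(true); }
    };

    QTEST_GUILESS_MAIN(TestExample)
    #include "testexample.moc"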
Signed-off-by: Jeremie Guichard <djebrest@gmail.com>
- We add a dive while offline.
- On a different computer (here simulated by a different local cache)
we add a different dive.
- Now we go back to the previous local cache (the one where we added
the first dive) and take that online (i.e., connect to cloud storage).
Now both of the new dives should have been added to our data file.
This is a rather trivial test with no conflict and a straightforward
merge. We need to add a lot more test cases to make sure this works as
expected and doesn't leave the user with a corrupt state.
Ideally, whatever happens, the user should never see an error...
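A rough sketch of that flow as a QTest function; every helper used
here (setOffline, loadFromCache, addDive, saveToCache, syncWithCloud,
diveCount) is a placeholder standing in for the real test plumbing,
not actual Subsurface API:

    void TestExample::testOfflineMerge()
    {
        // 1) add a dive while offline, against local cache A
        setOffline(true);
        loadFromCache("cacheA");
        addDive("30min at 12m");
        saveToCache("cacheA");

        // 2) "different computer": a second local cache B gets a
        //    different dive and pushes it to cloud storage
        setOffline(false);
        loadFromCache("cacheB");
        addDive("45min at 18m");
        syncWithCloud("cacheB");

        // 3) back to cache A, take it online; the merge should bring
        //    both new dives into the data file without conflicts
        loadFromCache("cacheA");
        syncWithCloud("cacheA");
        QCOMPARE(diveCount(), 2);
    }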
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
All this really does is make sure that the fast forward works if the
local cache has received updates that haven't made it to the server
yet.
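Roughly, with placeholder helpers again (setOffline, addDive,
saveToCache, syncWithCloud, hasDive are illustrative, not real
Subsurface API): the local cache is strictly ahead of the server, so
taking it online should simply fast forward the remote branch without
creating a merge commit.

    void TestExample::testCloudOnlineFastForward()
    {
        // create a change that only exists in the local cache
        setOffline(true);
        addDive("20min at 8m");
        saveToCache("cacheA");

        // going online: the server has seen nothing new, so the sync
        // is a plain fast forward and the dive must survive it
        setOffline(false);
        syncWithCloud("cacheA");
        QVERIFY(hasDive("20min at 8m"));
    }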
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
This just makes sure that writing data to git storage and reading it back
gives you the same result. Without the fixed generation of initial dive
site UUIDs this fails.
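A minimal sketch of such a round trip, again with placeholder helpers
(writeToGitStorage, clearDiveList, readFromGitStorage, diveCount are
illustrative names, and the repo path/branch arguments are made up):

    void TestExample::testGitStorageRoundTrip()
    {
        addDive("60min at 20m");
        int before = diveCount();

        // write everything to git storage, drop the in-memory state,
        // then read it back from the same repository and branch
        writeToGitStorage("/tmp/testrepo", "testbranch");
        clearDiveList();
        readFromGitStorage("/tmp/testrepo", "testbranch");

        // the round trip must not change the data; without stable
        // initial dive site UUIDs the reread data would differ
        QCOMPARE(diveCount(), before);
    }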
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>