This is not a great way to load-balance, but it works and doesn't require
high-end hardware on the backend.
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
The parser API was very annoying, as a number of tables
to be filled were passed in as pointers. The goal of this
commit is to collect all these tables in a single struct.
This should make it (more or less) clear what is actually
written into the divelog files.
Moreover, it should now be rather easy to search for
instances where the global logfile is accessed (and it
turns out that there are many!).
The divelog struct does not contain the tables as substructs,
but only collects pointers. The idea is that the "divelog.h"
file can be included without all the other files describing
the numerous tables.
To make it easier to use from C++ parts of the code, the
struct implements a constructor and a destructor. Sadly,
we can't use smart pointers, since the pointers are accessed
from C code. Therefore the constructor and destructor are
quite complex.
The whole commit is large, but was mostly an automatic
conversion.
One oddity of note: the divelog structure also contains
the "autogroup" flag, since that is saved in the divelog.
This actually fixes a bug: Before, when importing dives
from a different log, the autogroup flag was overwritten.
This was probably not intended and does not happen anymore.
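A rough sketch of the resulting layout, with illustrative (not the
exact) member and type names:

    /* Illustrative only: the real header merely forward-declares the
     * table types, so "divelog.h" can be included without pulling
     * them all in. */
    struct dive_table;
    struct trip_table;
    struct dive_site_table;

    struct divelog {
            struct dive_table *dives;
            struct trip_table *trips;
            struct dive_site_table *sites;
            bool autogroup;        /* saved in the divelog, hence kept here */
    #ifdef __cplusplus
            divelog();             /* allocates and clears the tables */
            ~divelog();            /* frees table contents and the tables */
    #endif
    };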
Signed-off-by: Berthold Stoeger <bstoeger@mail.tuwien.ac.at>
We had various random "free parts of the git info" leftovers from when
we passed down the git repo data ad hoc. Get rid of them, and replace
them with a single 'cleanup_git_info()' that does the final cleanup of
it all.
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
That function name was incomprehensible. What did it check? And what
did the return value mean?
So let's rename it to something that actually describes what it does,
and reverse the meaning of the return value while at it.
So now it's called 'remote_repo_uptodate()', and it returns true if the
remote repository branch has the same value as our 'saved_git_id'.
It's still a bit obscure, but at least within the context of the only
user, the code now makes _more_ sense than it used to:
    if (remote_repo_uptodate(fileNamePrt.data(), &info)) {
            appendTextToLog("Cloud sync shows local cache was current");
but maybe we could come up with even better semantics and naming, and
make it even clearer.
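For illustration, the semantics boil down to roughly this; the
declarations and the helper that fetches the remote branch SHA are
assumptions, not the actual code:

    #include <cstring>
    #include <cstdlib>

    struct git_info;
    extern const char *saved_git_id;                    /* SHA of our last load */
    char *get_remote_branch_sha(struct git_info *info); /* hypothetical helper */

    bool remote_repo_uptodate(const char *filename, struct git_info *info)
    {
            (void)filename;  /* identifies the local cache in the real code */
            char *remote_sha = get_remote_branch_sha(info);
            bool uptodate = remote_sha && saved_git_id &&
                            strcmp(remote_sha, saved_git_id) == 0;
            free(remote_sha);
            return uptodate;
    }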
Requested-by: Dirk Hohndel <dirk@hohndel.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
We have this nasty habit of randomly passing down all the different
things that we use to look up the local and remote git repository, and
the information associated with it.
Start collecting the data into a 'struct git_info' instead, so that it
is easier to manage, and easier and more logical to just look up
different parts of the puzzle.
This is a fairly mechanical conversion, but has moved all the basic
information collection to the 'is_git_repository()' function. That
function no longer actually opens the repository (so the 'dry_run'
argument is gone), and instead a successful 'is_git_repository()' is
followed by 'open_git_repository()' if you actually want the old
non-dry_run semantics.
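As a sketch of the resulting calling convention (field names and
signatures here are assumptions, not the exact code):

    #include <git2.h>

    /* One struct instead of a handful of loose parameters. */
    struct git_info {
            const char *url;       /* repo spec as given by the user */
            const char *branch;    /* branch name encoded in the spec */
            char *localdir;        /* local (cloud cache) directory */
            git_repository *repo;  /* filled in by open_git_repository() */
    };

    bool is_git_repository(const char *filename, struct git_info *info);
    bool open_git_repository(struct git_info *info);

    /* typical use after the conversion */
    static bool load_from_git(const char *filename)
    {
            struct git_info info;
            if (!is_git_repository(filename, &info))
                    return false;           /* not a git repo spec at all */
            if (!open_git_repository(&info))
                    return false;           /* old non-dry_run semantics */
            /* ... work with info.repo and info.branch ... */
            return true;
    }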
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Move the ARRAY_SIZE macro into a header file and use it to determine the
number of cloud servers that we need to check.
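The macro itself is the usual one-liner; which header it lives in and
the server list below are just for illustration:

    #define ARRAY_SIZE(array) (sizeof(array) / sizeof((array)[0]))

    /* illustrative, not the real hostnames */
    static const char *cloud_servers[] = { "cloud-eu.example.org",
                                           "cloud-us.example.org" };

    static unsigned count_servers(void)
    {
            return ARRAY_SIZE(cloud_servers);  /* no hard-coded count to forget */
    }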
Suggested-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
If we can't reach our preferred server, try using a different one.
The diff makes more sense when ignoring white space.
With this we check the connection to the cloud server much earlier and
in case of failure to connect try a different cloud_base_url.
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
The backend infrastructure will soon be able to support more than one
cloud server; these automagically stay in sync with each other.
One critical requirement for that to work is that once a session was
started with one of the servers, the complete session happens with that
server - we must not switch from server to server while doing a git
transaction. To make sure that's the case, we aren't trying to use DNS
tricks to make this load balancing scheme work, but instead try to
determine at program start which server is the best one to use.
Right now this is super simplistic. Two servers, one in the US, one in
Europe. By default we use the European server (most of our users appear
to be in Europe), but if we can figure out that the client is actually
in the Americas, use the US server. We might improve that heuristic over
time, but as a first attempt it seems not entirely bogus.
The way this is implemented is a simple combination of two free
webservices that together appear to give us a very reliable estimate
which continent the user is located on.
api.ipify.org gives us our external IP address
ip-api.com gives us the continent that IP address is on
If any of this fails or takes too long to respond, we simply ignore it
since either server will work. One oddity is that if we decide to change
servers we only change the settings that are stored on disk, not the
runtime preferences. This goes back to the comment above that we have to
avoid changing servers in mid sync.
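As a sketch of the approach using Qt's network classes (the exact
endpoints, fields and helper names here are assumptions, not the
actual Subsurface code):

    #include <QEventLoop>
    #include <QNetworkAccessManager>
    #include <QNetworkReply>
    #include <QNetworkRequest>
    #include <QString>
    #include <QTimer>
    #include <QUrl>

    /* Fetch a URL with a hard timeout; any failure yields an empty string
     * so the caller can simply stick with the default (European) server. */
    static QString blockingGet(const QString &url, int timeoutMs)
    {
            QNetworkAccessManager mgr;
            QNetworkReply *reply = mgr.get(QNetworkRequest(QUrl(url)));
            QEventLoop loop;
            QTimer::singleShot(timeoutMs, &loop, &QEventLoop::quit);
            QObject::connect(reply, &QNetworkReply::finished, &loop, &QEventLoop::quit);
            loop.exec();
            QString result;
            if (reply->isFinished() && reply->error() == QNetworkReply::NoError)
                    result = QString::fromUtf8(reply->readAll()).trimmed();
            reply->deleteLater();
            return result;
    }

    static bool clientLooksAmerican()
    {
            QString ip = blockingGet("https://api.ipify.org", 2000);
            if (ip.isEmpty())
                    return false;   /* can't tell, keep the European default */
            QString continent = blockingGet("http://ip-api.com/line/" + ip +
                                            "?fields=continent", 2000);
            return continent.contains("America");
    }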
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
unregister_dive() and delete_single_dive() are defined in
divelist.c, as they take an "index" argument into the global
divelist. Therefore, move their declarations to divelist.h.
Signed-off-by: Berthold Stoeger <bstoeger@mail.tuwien.ac.at>
In analogy to the xml-parser add a device-table to git's parser-state.
Currently this is unused. In upcoming commits the git parser will
then be changed to add device nodes in this table instead of the
global device table, the long-term goal being to detach the parsers
from global state and to make dive-import fully undoable.
Signed-off-by: Berthold Stoeger <bstoeger@mail.tuwien.ac.at>
We used a typedef "filter_preset_table_t" for the filter preset table,
because it is a "std::vector<filter_preset>". However, that is in
contrast to all the other global tables (dives, trips, sites) that we
have.
Therefore, turn this into a standard struct, which simply inherits
from "std::vector<filter_preset>". Note that while inheriting from
std::vector<> is generally not recommended, it is not a problem
here, because we don't modify it in any way, shape or form.
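In sketch form (the contents of filter_preset are elided):

    #include <string>
    #include <vector>

    struct filter_preset {
            std::string name;
            /* ... the actual filter data ... */
    };

    /* A named struct like the other global tables; deriving from
     * std::vector is usually discouraged, but is unproblematic for how
     * this table is used. */
    struct filter_preset_table : public std::vector<filter_preset> {
    };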
Signed-off-by: Berthold Stoeger <bstoeger@mail.tuwien.ac.at>
This is mostly copy and paste of other git loading code. Sadly,
it adds a lot of state to the parser-state. I wish we could pass
different parser states to the parser_* functions.
Signed-off-by: Berthold Stoeger <bstoeger@mail.tuwien.ac.at>
This may seem a bit heavy-handed as it adds more global state, but given
the number of ways in which attempts to sync with the cloud can fail it seems
much more reliable to claim success in the spots where we actually know that we
have successfully synced with the remote server. Transporting that information
back through the various call chains turned out to be very disruptive and ugly,
so I went with global state instead.
Whenever we access cloud storage (or any git repo), we always first check if it
actually is a git repo by calling is_git_repository() - so this is the perfect
spot to initialize the variable to false.
And there are only two spots where we either clone the remote repo
(create_local_repo()) or update the remote with the (potentially merged) local
changes (check_remote_status()). So those are the two places where we set the
variable to true.
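The resulting pattern is tiny; the names below are illustrative rather
than the actual identifiers:

    /* Illustrative sketch of the global flag and where it changes. */
    static bool cloud_sync_succeeded = false;

    /* every git/cloud access starts with is_git_repository(), so reset there */
    void reset_cloud_sync_status(void)
    {
            cloud_sync_succeeded = false;
    }

    /* set only from create_local_repo() and check_remote_status(), the two
     * spots where we know the remote server was actually reached */
    void mark_cloud_sync_succeeded(void)
    {
            cloud_sync_succeeded = true;
    }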
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
This never made sense and I think I just forgot to complete this code
when I first worked on it. Now we can see which version of Subsurface or
Subsurface-mobile created a merge.
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
The git parser loads into the global dive table, even if it
is called indirectly via parse_file(). However, parse_file()
may be given a different table. Fix this by extending the
git parser state.
Signed-off-by: Berthold Stoeger <bstoeger@mail.tuwien.ac.at>
On saving to a remote git repository, the transport was set to https://,
which broke saving to ssh:// repositories. Instead, determine the
transport from the remote URL.
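A sketch of the idea (the helper is made up for illustration):

    #include <string.h>

    /* Pick the transport from the remote URL instead of assuming https. */
    static const char *transport_of(const char *remote)
    {
            if (strncmp(remote, "ssh://", 6) == 0)
                    return "ssh";
            if (strncmp(remote, "http://", 7) == 0)
                    return "http";
            return "https";    /* cloud storage and https:// remotes */
    }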
Signed-off-by: Berthold Stoeger <bstoeger@mail.tuwien.ac.at>
Translate all of them, but also remove some redundant or possibly
misleading messages. These are now seen by users, not just developers
trying to debug the code.
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
The old system of cloud access updates with fake percentages just wasn't
helpful. Even worse, it hid a lot of important information from the user.
This should be more useful (but it will require that we localize the
messages sent from the git progress notifications and make them more
'user ready').
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
So far we didn't do that at all, we either relied on the user manually
creating a local repo, or we cloned a remote repo.
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
We first check the sha to see if we want to load at all. But at that
point we already have the repository and the branch and we have synced
with the remote. So when we decide that we need to reload from storage,
we don't need to repeat those steps, instead we can go directly to the
git load.
For that to work we need to pass the repository pointer and the branch
name back to the caller so that we can directly call git_load_dives().
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
Having subsurface-core as a directory name really messes with
autocomplete and is obviously redundant. Similarly, qt-mobile caused an
autocomplete conflict and also was inconsistent with the desktop-widget
name for the directory containing the "other" UI.
And while cleaning up the resulting change in the path name for include
files, I decided to clean up those even more to make them consistent
overall.
This could have been handled in more commits, but since this requires a
make clean before the build, it seemed more sensible to do it all in one.
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
Renamed from subsurface-core/git-access.h