// SPDX-License-Identifier: GPL-2.0
#ifndef GITACCESS_H
#define GITACCESS_H

#include "git2.h"
#include "filterpreset.h"

/*
 * core: introduce divelog structure (2022-11-08)
 *
 * The parser API was very annoying: a number of tables to be filled
 * were passed in as pointers. The goal of this commit is to collect
 * all these tables in a single struct. This should make it (more or
 * less) clear what is actually written into the divelog files.
 * Moreover, it should now be rather easy to search for instances
 * where the global logfile is accessed (and it turns out that there
 * are many!).
 *
 * The divelog struct does not contain the tables as substructs, but
 * only collects pointers. The idea is that the "divelog.h" file can
 * be included without all the other files describing the numerous
 * tables.
 *
 * To make it easier to use from the C++ parts of the code, the
 * struct implements a constructor and a destructor. Sadly, we can't
 * use smart pointers, since the pointers are accessed from C code.
 * Therefore the constructor and destructor are quite complex.
 *
 * The whole commit is large, but was mostly an automatic conversion.
 *
 * One oddity of note: the divelog structure also contains the
 * "autogroup" flag, since that is saved in the divelog. This
 * actually fixes a bug: before, when importing dives from a
 * different log, the autogroup flag was overwritten. This was
 * probably not intended and no longer happens.
 *
 * Signed-off-by: Berthold Stoeger <bstoeger@mail.tuwien.ac.at>
 */
struct dive_log;

#ifdef __cplusplus
extern "C" {
#else
#include <stdbool.h>
#endif

/*
 * cloudstorage: try to pick between multiple cloud servers (2021-04-11)
 *
 * The backend infrastructure will soon be able to support more than
 * one cloud server, which automagically stay in sync with each
 * other. One critical requirement for that to work is that once a
 * session has been started with one of the servers, the complete
 * session happens with that server - we must not switch from server
 * to server while doing a git transaction. To make sure that's the
 * case, we aren't trying to use DNS tricks to make this load
 * balancing scheme work, but instead try to determine at program
 * start which server is the best one to use.
 *
 * Right now this is super simplistic: two servers, one in the US,
 * one in Europe. By default we use the European server (most of our
 * users appear to be in Europe), but if we can figure out that the
 * client is actually in the Americas, we use the US server. We might
 * improve that heuristic over time, but as a first attempt it seems
 * not entirely bogus.
 *
 * The implementation is a simple combination of two free web
 * services that together appear to give us a very reliable estimate
 * of which continent the user is located on:
 *   api.ipify.org gives us our external IP address
 *   ip-api.com    gives us the continent that IP address is on
 * If either of these fails or takes too long to respond, we simply
 * ignore it, since either server will work.
 *
 * One oddity: if we decide to change servers, we only change the
 * settings that are stored on disk, not the runtime preferences.
 * This goes back to the comment above that we have to avoid
 * changing servers mid-sync.
 *
 * Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
 */
#define CLOUD_HOST_US "ssrf-cloud-us.subsurface-divelog.org"
#define CLOUD_HOST_U2 "ssrf-cloud-u2.subsurface-divelog.org"
#define CLOUD_HOST_EU "ssrf-cloud-eu.subsurface-divelog.org"
#define CLOUD_HOST_E2 "ssrf-cloud-e2.subsurface-divelog.org"
#define CLOUD_HOST_PATTERN "ssrf-cloud-..\\.subsurface-divelog\\.org"
#define CLOUD_HOST_GENERIC "cloud.subsurface-divelog.org"

enum remote_transport { RT_LOCAL, RT_HTTPS, RT_SSH, RT_OTHER };

struct git_oid;
struct git_repository;
struct divelog;

struct git_info {
	const char *url;
	const char *branch;
	const char *username;
	const char *localdir;
	struct git_repository *repo;
	unsigned is_subsurface_cloud:1;
	enum remote_transport transport;
};

extern bool is_git_repository(const char *filename, struct git_info *info);
extern bool open_git_repository(struct git_info *info);
extern bool remote_repo_uptodate(const char *filename, struct git_info *info);
extern int sync_with_remote(struct git_info *);
extern int git_save_dives(struct git_info *, bool select_only);
extern int git_load_dives(struct git_info *, struct divelog *log);
extern const char *get_sha(git_repository *repo, const char *branch);
extern int do_git_save(struct git_info *, bool select_only, bool create_empty);
extern void cleanup_git_info(struct git_info *);
extern const char *saved_git_id;
extern bool git_local_only;
extern bool git_remote_sync_successful;
extern void clear_git_id(void);
extern void set_git_id(const struct git_oid *);
void set_git_update_cb(int(*)(const char *));
int git_storage_update_progress(const char *text);
char *get_local_dir(const char *, const char *);
int git_create_local_repo(const char *filename);
int get_authorship(git_repository *repo, git_signature **authorp);

#ifdef __cplusplus
}
#endif

#endif // GITACCESS_H