Up to now, we only computed image hashes when actually displaying the
images. With this patch we start to compute hashes as soon as we load the
dive log from XML or from git. This happens in the background, so the user
should notice an increased CPU load only once per dive log.
Signed-off-by: Robert C. Helling <helling@atdotde.de>
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
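
A minimal sketch of how such background hashing could look with Qt
(QtConcurrent and QCryptographicHash; hashFile() and scheduleHash() are
hypothetical names, not Subsurface's actual functions):

#include <QtConcurrent/QtConcurrent>
#include <QCryptographicHash>
#include <QFile>

/* Read the image and compute its SHA-1; runs on a worker thread. */
static QByteArray hashFile(const QString &filename)
{
	QFile file(filename);
	if (!file.open(QIODevice::ReadOnly))
		return QByteArray();
	QCryptographicHash hash(QCryptographicHash::Sha1);
	hash.addData(&file);
	return hash.result();
}

/* Called while loading the dive log; returns immediately so the load
 * isn't blocked. The real code would store the result in some
 * hash-to-filename cache rather than discarding the future. */
static void scheduleHash(const QString &filename)
{
	QtConcurrent::run(hashFile, filename);
}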
Simply looking at the filename in the picture structure isn't enough
(arguably it should be, and that data structure should be updated, but
that's not how other parts of Subsurface have implemented things, so I
don't want to break that assumption here).
So instead we look up where the picture was actually loaded from and then
copy that file into the right location.
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
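
As a hedged sketch of that lookup-and-copy step (localFilePath() here is a
stand-in for whatever cache maps the stored filename to the real on-disk
location, not Subsurface's actual helper):

#include <QDir>
#include <QFile>
#include <QFileInfo>
#include <QString>

/* Stand-in for the real lookup: the actual code would consult a cache
 * mapping the filename stored in the picture struct to where the file
 * was really loaded from on this machine. */
static QString localFilePath(const QString &canonicalFilename)
{
	return canonicalFilename; /* placeholder */
}

/* Resolve the picture's real location, then copy it where it belongs. */
static bool copyPictureTo(const QString &canonicalFilename,
			  const QString &destination)
{
	QString source = localFilePath(canonicalFilename);
	if (source.isEmpty() || !QFile::exists(source))
		return false;
	/* make sure the target directory exists before copying */
	QDir().mkpath(QFileInfo(destination).absolutePath());
	return QFile::copy(source, destination);
}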
The interesting challenge here is what to do with the picture data stored
in the git repository. If the pictures are already in the file system (for
example because Subsurface is running on the same machine that this data
file was saved on) it would be silly to extract them again every time the
dive log is opened.
So instead we try to figure out if the pictures can be located and only
create local copies of them if that isn't the case.
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
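
A rough sketch of that locate-or-extract logic, assuming a hypothetical
hash-to-path lookup and with the blob from git already read into a
QByteArray (none of these names are the real Subsurface API):

#include <QByteArray>
#include <QFile>
#include <QString>

/* Assumed lookup, with an empty result meaning "not found": the real
 * code would consult a cache mapping image hashes to known local paths. */
static QString localFilePathFromHash(const QByteArray &hash)
{
	(void)hash;
	return QString(); /* placeholder */
}

/* Prefer a picture that already exists on this file system; only write
 * the data stored in git out to a local copy when the lookup fails. */
static QString locateOrExtract(const QByteArray &hash,
			       const QByteArray &blobData,
			       const QString &fallbackPath)
{
	QString local = localFilePathFromHash(hash);
	if (!local.isEmpty() && QFile::exists(local))
		return local;
	QFile out(fallbackPath);
	if (!out.open(QIODevice::WriteOnly))
		return QString();
	out.write(blobData);
	return fallbackPath;
}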
Yes, this could easily be done from the C code. But this seems just so
much easier, and I don't have to worry about the oddities of Windows and
all that.
I'm lazy. So sue me.
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
The lower level functions will already report that things didn't connect
successfully; there is no reason to repeat it here (which would also
expose the git URL).
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>