When analyzing the buffer that is handed to the first_object_id function
we carefully check that we don't read past the end of the input buffer,
but there was still one code path that could have us do just that.
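For illustration, a minimal sketch of the kind of bounded scan this is about
(the marker string and function name are made up, not the actual Subsurface
code):

#include <stddef.h>
#include <string.h>

/* Every read is bounded by 'end': the strncmp() window as well as the
 * digit scan that follows the marker. */
static int first_object_id_sketch(const char *buf, size_t len)
{
	const char *p = buf, *end = buf + len;
	const char *marker = "object_id{";
	size_t mlen = strlen(marker);

	while (p + mlen <= end) {		/* never compare past the end */
		if (!strncmp(p, marker, mlen)) {
			int id = 0;

			p += mlen;
			while (p < end && *p >= '0' && *p <= '9')
				id = id * 10 + (*p++ - '0');
			return id;
		}
		p++;
	}
	return -1;	/* no object id found in this buffer */
}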
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
When starting from the first dive on the dive computer we called getDive
for every dive starting with id 0 instead of figuring out which id is
actually the first one that we downloaded.
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
This actually makes us internally use 'micro-degrees' for latitude and
longitude, and we never turn them into floating point either at parse
time or save time.
That said, the Uemis downloader internally does still use atof() when
converting things, which is likely a bug (locale issues and all that),
but I'll ask Dirk to check it out.
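As a sketch of the integer-only idea (this is not the actual parser, just an
illustration of how the conversion can avoid atof() and its locale problems):

/* Parse a decimal-degrees string like "48.137270" into integer
 * micro-degrees using nothing but integer math, so the result does not
 * depend on the current locale's decimal separator. */
static int parse_microdegrees(const char *s)
{
	int sign = 1, value = 0;

	if (*s == '-') {
		sign = -1;
		s++;
	}
	while (*s >= '0' && *s <= '9')		/* whole degrees */
		value = value * 10 + (*s++ - '0');
	value *= 1000000;
	if (*s == '.' || *s == ',') {		/* tolerate both separators */
		int scale = 100000;

		s++;
		while (*s >= '0' && *s <= '9' && scale) {
			value += (*s++ - '0') * scale;
			scale /= 10;
		}
	}
	return sign * value;
}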
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
This improves a bit more on commit d2dd0eb39efe "When starting with an
empty data file and downloading dives, number them" by providing a better
starting number when a user downloads dives from a Uemis SDA into an empty
data file.
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
This way individual pieces can be turned on and off.
The commit also adds code to read from a disk image (instead of the SDA)
without all the long timeouts.
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
Never make trivial changes without testing them. This was missing a '!'
before the strcmp - so the wrong code got executed when trying to get the
DeviceId and everything afterwards failed without a valid DeviceId.
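Reduced to a sketch, the fix looks like this (names and the number base are
simplified for illustration):

#include <stdlib.h>
#include <string.h>

/* strcmp() returns 0 on a match, so without the '!' the assignment ran for
 * every attribute EXCEPT "deviceid" - which is exactly what happened. */
static void handle_attribute(const char *name, const char *value,
			     unsigned int *deviceid)
{
	if (!strcmp(name, "deviceid"))		/* the '!' that was missing */
		*deviceid = strtoul(value, NULL, 10);
}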
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
The initial downloader reused the XML parsing of SDA files that was
implemented early on in order to support the information extracted from the
SDA with the Java applet. But creating this intermediary XML file and
handing it off to the XML import function always seemed like an ugly way
to do things. This became even more obvious when adding more features to
the Uemis downloader.
This commit completely changes the downloader to instead create dives and
record them directly.
This also adds support for divespots (which are stored in a separate
database that needs to be queried after the divelog and dive entries have
been combined - the Uemis firmware clearly was written by monkeys on
crack - oh wait: I'm trusting these same people to get the deco right?).
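Roughly, the divespot handling becomes a second pass over the combined data;
the helpers below are invented stand-ins to show the flow, not the real
functions:

#include <stddef.h>

struct dive;	/* opaque here - the real structure lives in dive.h */

/* Hypothetical helpers, signatures invented for this sketch */
extern int dive_get_divespot_id(const struct dive *dive);
extern int uemis_get_divespot(int divespot_id, char **name,
			      int *latitude_udeg, int *longitude_udeg);
extern void dive_set_location(struct dive *dive, char *name,
			      int latitude_udeg, int longitude_udeg);

/* After divelog and dive entries have been combined, resolve each dive's
 * divespot id against the SDA's separate divespot database. */
static void resolve_divespots(struct dive **dives, int nr)
{
	for (int i = 0; i < nr; i++) {
		int id = dive_get_divespot_id(dives[i]);
		char *name;
		int lat, lon;

		if (id < 0)
			continue;	/* this dive has no divespot reference */
		if (uemis_get_divespot(id, &name, &lat, &lon) == 0)
			dive_set_location(dives[i], name, lat, lon);
	}
}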
This commit leaves the SDA import capability in the XML parser intact.
I'll remove that later. Because of this it actually adds a few lines of
code, but the overall change will be a substantial code deletion.
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
Actually, it's even better than that. Thanks to the new divecomputer
data structure we can now simply look up in the dive_table which dives have
been downloaded from this specific Uemis SDA.
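In sketch form (the struct layouts below are approximations, not copied from
dive.h), the lookup boils down to:

#include <stdint.h>

/* Approximate stand-ins for the Subsurface structures */
struct divecomputer {
	uint32_t deviceid, diveid;
	struct divecomputer *next;
};
struct dive {
	struct divecomputer dc;
};
struct dive_table {
	int nr;
	struct dive **dives;
};
extern struct dive_table dive_table;

/* The highest diveid we have on file for this deviceid is all the
 * downloader needs to know - no gconf state required. */
static uint32_t newest_downloaded_diveid(uint32_t deviceid)
{
	uint32_t max_id = 0;

	for (int i = 0; i < dive_table.nr; i++) {
		struct divecomputer *dc;

		for (dc = &dive_table.dives[i]->dc; dc; dc = dc->next)
			if (dc->deviceid == deviceid && dc->diveid > max_id)
				max_id = dc->diveid;
	}
	return max_id;
}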
This patch removes the old gconf based code - which leads to one
unfortunate problem: the first time a Uemis SDA owner runs this version of
Subsurface against their data file ALL dives will be downloaded again
(which may not be a bad thing as we have improved a few other details of
Uemis support so now they get their deco information, surface pressure and
other data that we have started to support since 2.1). Still, this is not
ideal. But I didn't want to keep the legacy code around since this new
solution is so much cleaner.
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
The code pretended to support this for libdivecomputer based downloads,
but it had never been hooked up when the native Uemis downloader was
implemented. When I finally decided to close that feature gap I realized
that the original code was, shall we say, "aspirational" or "completely
bogus" and therefore never worked.
So instead of just hooking up the code for the Uemis downloader I instead
implemented this correctly for the first time for both libdivecomputer and
the native Uemis downloader.
In order not to have to mess with multithreaded Gtk development I simply
opted for a helper function that fires on a 100ms timeout and ends the
dialog without a response. This way we can run the dialog while
waiting for the download to finish, still update the progress bar and
respond in a useful manner to the user clicking cancel.
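Condensed to a sketch (the done flag and progress callback are placeholders
for the download thread):

#include <gtk/gtk.h>

/* Fires 100ms after being armed and ends the dialog with GTK_RESPONSE_NONE,
 * so gtk_dialog_run() returns regularly even without user input. */
static gboolean end_dialog_timeout(gpointer data)
{
	gtk_dialog_response(GTK_DIALOG(data), GTK_RESPONSE_NONE);
	return FALSE;			/* one-shot; re-armed before each run */
}

static gboolean poll_download_dialog(GtkWidget *dialog, GtkProgressBar *bar,
				     volatile gboolean *download_done,
				     gdouble (*progress)(void))
{
	while (!*download_done) {
		g_timeout_add(100, end_dialog_timeout, dialog);
		gtk_progress_bar_set_fraction(bar, progress());
		if (gtk_dialog_run(GTK_DIALOG(dialog)) == GTK_RESPONSE_CANCEL)
			return FALSE;	/* the user clicked Cancel */
	}
	return TRUE;
}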
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
Once again buffer_insert contained a blatant bug.
The code wasn't copying the trailing '\0' when extending the string, which
usually didn't end up blowing up the code (and therefore kept the bug
hidden until now) because of the way realloc reused memory - we just had
trailing garbage strings. But sometimes we weren't so lucky and the strlen
in a subsequent call of buffer_insert would run past the end of the
allocated buffer.
Oops.
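The core of the fix, sketched rather than quoted (error handling omitted):

#include <stdlib.h>
#include <string.h>

/* When making room for the inserted text, the tail that gets moved must
 * include the terminating '\0' - the "+ 1" below is what was missing and
 * what later strlen() calls relied on. */
static void buffer_insert_sketch(char **buffer, int *pos, const char *insert)
{
	int len = strlen(*buffer);
	int ilen = strlen(insert);

	*buffer = realloc(*buffer, len + ilen + 1);
	memmove(*buffer + *pos + ilen, *buffer + *pos, len - *pos + 1);
	memcpy(*buffer + *pos, insert, ilen);
	*pos += ilen;
}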
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
We are accessing offset 24 in an array of length 24. To make things easier
for the base64 conversion we just treat this as an off-by-three error and
instead create an array large enough for 27 elements and convert a
sufficient number of base64 chars to initialize all of them.
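The arithmetic behind that, as a sketch: base64 turns 4 characters into
3 bytes, so a 27 byte result needs 36 characters decoded (decoder
simplified, no padding handling):

static int b64value(char c)
{
	if (c >= 'A' && c <= 'Z') return c - 'A';
	if (c >= 'a' && c <= 'z') return c - 'a' + 26;
	if (c >= '0' && c <= '9') return c - '0' + 52;
	if (c == '+') return 62;
	if (c == '/') return 63;
	return 0;	/* '=' padding and anything unexpected decode as 0 */
}

static void decode_27(const char *base64, unsigned char out[27])
{
	for (int i = 0; i < 27 / 3; i++) {	/* 9 groups: 4 chars -> 3 bytes */
		int v = (b64value(base64[4 * i]) << 18) |
			(b64value(base64[4 * i + 1]) << 12) |
			(b64value(base64[4 * i + 2]) << 6) |
			 b64value(base64[4 * i + 3]);

		out[3 * i] = v >> 16;
		out[3 * i + 1] = (v >> 8) & 0xff;
		out[3 * i + 2] = v & 0xff;
	}
}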
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
If we run out of space on the Uemis SDA during download and the user
unmounts, unplugs and replugs the SDA, we need to take care to correctly
reset the file number we use for finding the correct ANS file.
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
Start every step with much longer timeouts (until we get the first
response back), but then use shorter timeouts once we have started
receiving data.
This uses up fewer of the ANS files and allows us to get more dives
downloaded before the SDA has to be unplugged to reset communications,
yet at the same time it still improves the overall download time.
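The strategy in sketch form (the timeout values and the read helper are
illustrative, not the real numbers or functions):

/* Hypothetical helper: read up to 'len' bytes, give up after timeout_ms
 * without any data. */
extern int read_with_timeout(int fd, char *buf, int len, int timeout_ms);

#define FIRST_TIMEOUT_MS	120000	/* the SDA can take very long to answer */
#define DATA_TIMEOUT_MS		  2000	/* between chunks it is much quicker */

static int read_answer(int fd, char *buf, int size)
{
	int total = 0, timeout = FIRST_TIMEOUT_MS;

	while (total < size) {
		int n = read_with_timeout(fd, buf + total, size - total, timeout);

		if (n <= 0)
			break;			/* timeout or error ends this answer */
		total += n;
		timeout = DATA_TIMEOUT_MS;	/* data is flowing - shorten the wait */
	}
	return total;
}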
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
This will hopefully not be something we need often, but if we improve
support for a divecomputer (either in libdivecomputer or in our native
Uemis code or even in the way we handle (and potentially discard) events),
then it is extremely useful to be able to say "re-download things
from the divecomputer and for things that were not edited in Subsurface,
don't try to merge the data (which gives BAD results if for example you
fixed a bug in the depth calculation in libdivecomputer) but instead
simply take the samples, the events and some of the other unedited data
straight from the download".
This commit implements just that - a "force download" checkbox in the
download dialog that makes us reimport all dives from the dive computer,
even the ones we already have, and an "always prefer downloaded dive"
checkbox that then tells Subsurface not to merge but simply to take the
data from the downloaded dive - without overwriting the things we have
already edited in Subsurface (like location, buddy, equipment, etc).
This, as a precaution, refuses to merge dives that don't have identical
start times. So if you have edited the date / time of a dive or if you
have previously merged your dive with a different dive computer (and
therefore modified samples and events) you are out of luck.
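In sketch form (helper names invented), the rule is roughly:

#include <stddef.h>
#include <time.h>

struct dive;	/* opaque here */

/* Hypothetical helpers standing in for the real merge code */
extern time_t dive_when(const struct dive *dive);
extern void copy_user_edited_fields(const struct dive *from, struct dive *to);

/* "Always prefer downloaded dive": keep what the user edited in Subsurface,
 * take samples, events and the other unedited data straight from the
 * download - but only if the start times match exactly. */
static struct dive *prefer_downloaded(struct dive *existing, struct dive *downloaded)
{
	if (dive_when(existing) != dive_when(downloaded))
		return NULL;	/* different start time: refuse to merge */
	copy_user_edited_fields(existing, downloaded);	/* location, buddy, equipment, ... */
	return downloaded;	/* everything else comes from the download */
}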
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
When the user closes the divelist and starts with an empty file it makes
no sense to assume that she only wants to download new dives since the
last time dives have been downloaded. So if the current divelist is empty
we ignore that information and start from the beginning again.
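The resulting check is as small as it sounds (names approximated):

#include <stdint.h>

/* With an empty dive list there is nothing for the "only new dives"
 * optimization to be relative to, so start from the beginning again. */
static uint32_t first_id_to_fetch(int dives_in_list, uint32_t newest_downloaded_id)
{
	return dives_in_list == 0 ? 0 : newest_downloaded_id;
}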
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
With this commit we not only use the getDivelogs command but also the
getDive command for each of the dives that was downloaded. Oddly, that
makes quite a bit of redundant (and at times slightly contradictory) data
available, but also many new things.
We now get weight, suit and notes that were stored with a dive in the
logbook on the divecomputer. There is a ton more data available that we
don't use yet: for example, information about altitude, a decoindex, dive
type and dive activity, other equipment information, etc.
I still need to decide how much of this I want to make available in
Subsurface (and how I want to present this - after all most of this is not
available from other dive computers).
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
This is just the first step - convert the string literals, try to catch
all the places where this isn't possible and the program needs to convert
string constants at runtime (those are the N_ macros).
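For reference, a minimal sketch of the pattern (using "subsurface" as the
text domain and the ./locale path mentioned below):

#include <libintl.h>
#include <locale.h>
#include <stdio.h>

#define _(string)	gettext(string)
#define N_(string)	(string)	/* mark only - translated later via gettext() */

/* Static initializers cannot call gettext(), so they get marked with N_()
 * and are run through gettext() at the point of use. */
static const char *weekday[] = {
	N_("Sun"), N_("Mon"), N_("Tue"), N_("Wed"),
	N_("Thu"), N_("Fri"), N_("Sat"),
};

int main(void)
{
	setlocale(LC_ALL, "");
	bindtextdomain("subsurface", "./locale");	/* the hardcoded path for now */
	textdomain("subsurface");

	printf("%s: %s\n", _("Weekday"), gettext(weekday[0]));
	return 0;
}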
Add a very rough first German localization so I can at least test what I
have done. Seriously, I have never used a localized OS, so I am certain
that I have many of the 'standard' translations wrong. Someone please take
over :-)
Major issues with this:
- right now it hardcodes the search path for the message catalog to be
./locale - that's of course bogus, but it works well while doing initial
testing. Once the tooling support is there we just should use the OS
default.
- even though de_DE defaults to ISO-8859-15 (or ISO-8859-1 - the internets
can't seem to agree) I went with UTF-8 as that is what Gtk appears to
want to use internally. ISO-8859-15 encoded .mo files create funny-looking
artefacts instead of umlauts.
- no support at all in the Makefile - I was hoping someone with more
experience in how to best set this up would contribute a good set of
Makefile rules - likely this will help fix the first issue in that it
will also install the .mo file(s) in the correct place(s)
For now simply run
msgfmt -c -o subsurface.mo deutsch.po
to create the subsurface.mo file and then move it to
./locale/de_DE.UTF-8/LC_MESSAGES/subsurface.mo
If you make changes to the sources and need to add new strings to be
translated, this is what seems to work (again, should be tooled through
the Makefile):
xgettext -o subsurface-new.pot -s -k_ -kN_ --add-comments="++GETTEXT" *.c
msgmerge -s -U po/deutsch.po subsurface-new.pot
If you do this PLEASE do one commit that just has the new msgids, as
changes in line numbers create a TON of diff-noise. Do changes to
translations in a SEPARATE commit.
- no testing at all on Windows or Mac
It builds on Windows :-)
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
otherwise the file descriptor stays open, which prevents smooth unmounting
as long as Subsurface is open
Signed-off-by: Martin Gysel <me@bearsh.org>
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
Running under Valgrind showed a couple of silly bugs.
Worse, intentionally running into various error scenarios showed that we
could get the buffer handling in the raw parsing code to break down - we
would fail to process the correctly downloaded files.
To make it easier to get this right I restructured the code to collect the
XML buffer in a different way - this works much better and has stood up
well under testing so far.
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
Instead of trying to figure out in the GUI code whether to call the
downloader again, the logic was moved into the downloader itself. It now
attempts to deal cleverly with running out of space on the dive computer
filesystem - and in return is able to process the maximum number of dives
(instead of just ten or so at a time).
Even on partial reads before a failure we are able to collect the data
that was completely transferred and report those dives.
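Sketched with invented helpers, the download loop's "keep what we got"
behavior is roughly:

/* Hypothetical helpers standing in for the real protocol code */
extern int uemis_get_dive(int id, char **xml_fragment);	/* negative on failure */
extern void record_dive_fragment(const char *xml_fragment);

/* Ask for dives one by one; if the SDA's filesystem fills up or an answer
 * fails half-way, stop - but everything that was transferred completely
 * has already been recorded and gets reported to the user. */
static int download_range(int first_id, int last_id)
{
	int id;

	for (id = first_id; id <= last_id; id++) {
		char *fragment;

		if (uemis_get_dive(id, &fragment) < 0)
			break;			/* keep what we have so far */
		record_dive_fragment(fragment);
	}
	return id - first_id;	/* number of dives successfully downloaded */
}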
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
This includes one major hack that uses a private data structure from
libdivecomputer to allow us to show the Uemis Zurich as one computer the
user can import from.
Once the user has chosen the Uemis we don't use libdivecomputer but our
own downloader. Just like in the libdivecomputer case this runs in its own
thread and updates the import dialog with progress information.
The code also keeps track of the last dive that has been downloaded from a
Uemis computer so we only import new dives on subsequent downloads. And
since the Uemis Zurich gives us its device id, we make this a "per
divecomputer" property for people who dive with multiple Uemis Zurich
computers.
This uses the debugfile infrastructure to allow easily collecting
debugging output - especially on Windows where by default console output
is lost.
Known limitations: when the Uemis runs out of space (it uses its
filesystem for communication with the host computer) we have no graceful
way to reset things. This is why the code doesn't try to download ALL
dives on the computer but instead downloads them in increments of ten
dives. This clearly needs to be addressed once I understand how to reset
the device.
The Cancel button of the import dialog isn't correctly hooked up, yet.
I still need to figure out how to gracefully shut down a download without
potentially hanging the device.
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>