I want to have some way to show multiple dive computers, so start off by
adding the name of the current one. So if we change dive computers,
we'll see it.
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
From 178a3f0d6d5112f76943fec5f8c1c1f3b173a7f4 Mon Sep 17 00:00:00 2001
From: Linus Torvalds <torvalds@linux-foundation.org>
Date: Fri, 7 Dec 2012 09:34:18 -0800
Subject: [PATCH 2/2] Tune the dive joining surface event insert code
So this makes us do surface events only if the samples are more than one
minute apart, and are shallow enough (randomly selected at 5m).
We can add more heuristics. Maybe we should relate the one-minute cut-off
to the time between the previous sample and the one before that:
if some computer (or manually entered dive) has a long time between
*all* samples, we'd make the cut-off time longer.
Baby steps.
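To make the heuristic concrete, the check is roughly shaped like this
(illustrative sketch only, not the actual insert code; the function name is
made up):
    /* Sketch: only worth inserting a surface event between two samples if
     * they are more than a minute apart and the dive is shallow there. */
    static int worth_adding_surface_event(int last_time_s, int time_s, int depth_mm)
    {
        if (time_s - last_time_s <= 60)     /* samples less than a minute apart */
            return 0;
        if (depth_mm > 5000)                /* deeper than 5m: not a surface gap */
            return 0;
        return 1;
    }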
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
This doesn't really change the logic of the merging, but it does mean
that the end result tends to be less unexpected: when downloading dives
that end up being merged with pre-existing dives (because you have
multiple dive computers, for example), the newly downloaded dive data
will tend to be appended to the old dive data, rather than showing up
first.
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
Instead of just taking the vendors and products for the supported
divecomputers in the order libdivecomputer provides them to us, we sort
them as we add them to our list of lists. While doing this we also track
the longest product name and try to make sure that the combobox for the
product is set to a fixed width that's wider than the longest product
name.
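The sorted insertion itself is straightforward; something along these lines
(a GLib-based sketch, the function name is made up):
    #include <glib.h>
    #include <string.h>

    /* Sketch: keep the product list sorted as we add to it and remember the
     * longest product name so the combo box width can be fixed to something
     * wider than that afterwards. */
    static GList *add_product(GList *products, const char *product, size_t *maxlen)
    {
        size_t len = strlen(product);

        if (len > *maxlen)
            *maxlen = len;
        return g_list_insert_sorted(products, (gpointer)product,
                                    (GCompareFunc)strcmp);
    }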
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
Loading these two XML files created a divelist with two trips that each
started at the same time but that were NOT linked together (because the
code in insert_trip detected that they started at the same time). Since
the trips were not linked together, trying to then merge those two trips
from the UI got us into an infinite loop, trying to move a dive from one
trip into the same trip (as the code couldn't find the trip that wasn't
linked into the dive_trip_list).
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
The existing code did not move the dives that are part of the second trip
to the first trip (and forgot to keep the 'better' notes as well).
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
Handle the case where we have no divecomputer information.
Reported-by: Sergey Starosek <sergey.starosek@gmail.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
The length of that combo box got increasingly insane as libdivecomputer
supported more and more models. To make this more scalable we now have two
combo boxes. One with just the vendors and a second one with the products
depending on the vendor selected.
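The two boxes are tied together in the usual GTK way: a "changed" handler on
the vendor box repopulates the product box (GTK3-style sketch only, the
callback name is made up):
    /* Sketch: refill the product combo box whenever the vendor changes. */
    static void vendor_changed_cb(GtkComboBox *vendor_combo, gpointer user_data)
    {
        GtkComboBoxText *products = GTK_COMBO_BOX_TEXT(user_data);
        int vendor = gtk_combo_box_get_active(vendor_combo);

        gtk_combo_box_text_remove_all(products);
        /* ... append the product names known for this vendor ... */
    }
    /* connected with something like:
     *   g_signal_connect(vendor_combo, "changed",
     *                    G_CALLBACK(vendor_changed_cb), product_combo);
     */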
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
Several blatant mistakes prevented this from ever working.
Now we correctly record ndl / stoptime / stopdepth in every sample and no
longer issue bogus events.
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
When starting from the first dive on the dive computer we called getDive
for every dive starting with id 0 instead of figuring out which id is
actually the first one that we downloaded.
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
Several things were wrong.
- we saved it as floating point (that was stupid, given the locale issue
with that and given the fact that the precision was really artificial)
- we always saved it when set (we should only save it if the value is
different from our default of 1030g/l == salt water)
- most embarrassing - the unit we assigned was wrong. That's g/l, not
kg/l...
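Put differently, the save logic now amounts to something like this (sketch,
names illustrative; the value is an integer in g/l):
    #include <stdio.h>

    #define SALTWATER_SALINITY 1030    /* g/l, our default */

    /* Sketch: only write salinity out if it was explicitly set and differs
     * from the salt water default, and write it as an integer in g/l. */
    static void save_salinity(FILE *f, int salinity)
    {
        if (!salinity || salinity == SALTWATER_SALINITY)
            return;
        fprintf(f, " salinity='%d g/l'", salinity);
    }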
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
This actually makes us internally use 'micro-degrees' for latitude and
longitude, and we never turn them into floating point either at parse
time or save time.
That said, the Uemis downloader internally does still use atof() when
converting things, which is likely a bug (locale issues and all that),
but I'll ask Dirk to check it out.
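The integer handling is simple enough; roughly this, on the save side
(sketch only, ignoring the parse side and some sign-handling details):
    #include <stdio.h>

    typedef struct { int udeg; } degrees_t;    /* micro-degrees */

    /* Sketch: print a latitude/longitude without ever using floating point. */
    static void put_degrees(FILE *f, degrees_t value)
    {
        int udeg = value.udeg;
        const char *sign = "";

        if (udeg < 0) {
            sign = "-";
            udeg = -udeg;
        }
        fprintf(f, "%s%d.%06d", sign, udeg / 1000000, udeg % 1000000);
    }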
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
I hate using floating point, this tries to at least make parts of it be
integer logic, and avoid the whole locale issue. This still keeps the
latitude and longitude internally as a floating point value, and parses
it that way, but I'm slowly moving towards less and less FP use.
We're going to use micro-degrees for location information: that's
sufficient to about a tenth of a meter precision, and it fits in a
32-bit integer.
If you specify dive sites with more precision than that, you may have
OCD.
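For the record, the arithmetic behind that: one degree of latitude is roughly
111 km, so one micro-degree is roughly 0.11 m; and since latitude and
longitude never exceed +/-180 degrees, the largest value is +/-180,000,000
micro-degrees, comfortably inside the +/-2,147,483,647 range of a signed
32-bit integer.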
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
This is improving a bit more on commit d2dd0eb39efe "When starting with an
empty data file and downloading dives, number them" by providing a better
starting number when a user downloads dives from a Uemis SDA into an empty
data file.
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
We have been very careful not to mess with the numbering that a user may
intend - but one obvious case where we should automatically number the
dives appears to be the first time download from a dive computer. Right
now all dives show up with number '0' and that's just really ugly and a
bad experience for a first time user.
With this change if a user starts with an empty data file and downloads
dives from a computer for the first time, Subsurface will give them
numbers starting with '1'.
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
This small change to the Makefile allows you to call
make CLCFLAGS=-DDEBUG
or some other define directly from the command line. It gets added to the
CFLAGS without overwriting the CFLAGS.
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
This patch does 4 small divelist.c changes in the following
order of importance:
1) In find_trip_by_time() there is now a check whether a trip was actually
found before its "when" value is looked at.
2) Make remember_tree_state() slightly safer. If for example we have recently
deleted a trip from the linked list, it may still exist in the GTK tree model,
thus we want to check when calling find_trip_by_time() if there is an actual
match before setting the "expanded" flag for a trip (see the sketch after
this list).
3) When merging two trips in merge_trips_cb(), only use the tree model
to retrieve the timestamps (DIVE_DATE) and then find matching trips with
find_matching_trip(). Once we have pointers to the two trips to be merged,
move dives from one to another iterating with add_dive_to_trip().
4) In merge_trips_cb() - remember the tree state, repopulate the tree and
restore tree state, since now we are not adding/removing rows directly.
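The shape of the check in point 2) is roughly this (sketch, not the literal
diff):
    /* Sketch: only trust the result of find_trip_by_time() if a trip was
     * actually found and it really matches the timestamp. */
    dive_trip_t *trip = find_trip_by_time(when);

    if (trip && trip->when == when)
        trip->expanded = TRUE;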
Reported-by: Miika Turkia <miika.turkia@gmail.com>
Signed-off-by: Lubomir I. Ivanov <neolit123@gmail.com>
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
This tunes the heuristics for when to merge two dives together into a
single dive. We used to just look at the date, and say "if they're
within one minute of each other, try to merge". This looks at the
actual dive data, and tries to see just how much sense it makes to
merge the dive.
It also checks if the dives to be merged use different dive computers,
and if so relaxes the one minute to five, since most people aren't
quite as OCD as I am, and don't tend to set their dive computers quite
that exactly to the same time and date.
I'm sure people can come up with other heuristics, but this should
make that easier too.
NOTE! If you have things like wrong timezones etc, and the
divecomputer dates are thus off by hours rather than by a couple of
minutes, this will still not merge them. For that kind of situation,
we'd need some kind of manual merge option. Note that that is *not*
the same as the current "merge two adjacent dives" operation, which
joins two separate dives into one *longer* dive with a surface
interval in between.
That kind of manual merge UI makes sense, but is independent of this
particular change.
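The time-based part of the heuristic boils down to something like this
(sketch; the helper name is made up, the fuzz values are the ones described
above):
    /* Sketch: allow a larger clock difference when the two dives come from
     * different dive computers, since their clocks are rarely in sync. */
    static int close_enough_to_merge(struct dive *a, struct dive *b, int same_dc)
    {
        int fuzz = same_dc ? 60 : 5 * 60;    /* seconds */
        timestamp_t delta = a->when > b->when ? a->when - b->when : b->when - a->when;

        return delta <= fuzz;
    }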
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
This also replaces the old heuristic for when we are in deco with the
(hopefully correct) bits in the sample flags.
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
This commit changes the code that was recently introduced to deal with
deco ceilings. Instead of handling these through events we now store the
ceiling (which in reality is the deepest deco stop with all known dive
computers) and the stop time at that ceiling in the samples.
This also adds support for NDL (non-stop dive limit), which both dive
computers that give us ceiling / deco information appear to report as
well (when the diver isn't in deco).
When the mouse hovers over the profile we now display the NDL, the
current deco obligation and (if we are able to tell from the data)
whether we are at a safety stop.
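In terms of data, each sample now carries roughly this (sketch; the unit
types are the ones Subsurface already uses, the exact field layout may
differ):
    /* Sketch: deco information lives in every sample instead of in events. */
    struct sample {
        duration_t time;
        depth_t depth;
        /* ... */
        duration_t ndl;          /* remaining non-stop time, when not in deco */
        duration_t stoptime;     /* time to spend at the deepest stop */
        depth_t stopdepth;       /* the deepest deco stop, i.e. the ceiling */
    };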
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
This way individual pieces can be turned on and off.
The commit also adds code to read from a disk image (instead of the SDA)
without all the long timeouts.
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
Never make trivial changes without testing them. This was missing a '!'
before the strcmp - so the wrong code got executed when trying to get the
DeviceId and everything afterwards failed without a valid DeviceId.
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
This was necessary for the Uemis downloader when we used the SDA file
format as intermediary data format and imported that as XML buffer.
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
The downloader has been integrated into Subsurface for a while and with
the recent change to no longer have it create the old style SDA files as
intermediary format there is no need anymore to support that format in the
XML parser.
This deletes almost 300 lines of code. Yay!
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
The initial downloader reused the XML parsing of SDA files that was
implemented early in order to support the information extracted from the
SDA with the java applet. But creating this intermediary XML file and
handing it off to the XML import function always seemed like an ugly way
to do things. This became even more obvious when adding more features to
the Uemis downloader.
This commit completely changes the downloader to instead create dives and
record them directly.
This also adds support for divespots (which are stored in a separate
database that needs to be queried after the divelog and dive entries have
been combined - the Uemis firmware clearly was written by monkeys on
crack - oh wait: I'm trusting these same people to get the deco right?).
This commit leaves the SDA import capability in the XML parser intact.
I'll remove that later. Because of this it actually adds a few lines of
code, but the overall change will be a substantial code deletion.
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
Actually, it's even better than that. Thanks to the new divecomputer
datastructure we can now simply look up in the dive_table which dives have
been downloaded from this specific Uemis SDA.
This patch removes the old gconf based code - which leads to one
unfortunate problem: the first time a Uemis SDA owner runs this version of
Subsurface against their data file ALL dives will be downloaded again
(which may not be a bad thing as we have improved a few other details of
Uemis support so now they get their deco information, surface pressure and
other data that we have started to support since 2.1). Still, this is not
ideal. But I didn't want to keep the legacy code around since this new
solution is so much cleaner.
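With that data structure the lookup is just a walk over the dive table,
something like this (sketch; the helper name is made up):
    /* Sketch: find the most recent dive already imported from this SDA. */
    static struct dive *newest_dive_from(uint32_t deviceid)
    {
        struct dive *newest = NULL;
        int i;

        for (i = 0; i < dive_table.nr; i++) {
            struct dive *dive = dive_table.dives[i];

            if (dive->dc.deviceid != deviceid)
                continue;
            if (!newest || dive->when > newest->when)
                newest = dive;
        }
        return newest;
    }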
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
I was a little too eager to add the deco feature to Subsurface. Jef and I
went back and forth a few more times and the definition of those events
changed. I guess I shouldn't have committed that code until the
corresponding libdivecomputer code had been pushed.
This commit now brings us in sync with the current master of
libdivecomputer (but should compile with 0.2 as well - only deco events
won't work then).
One issue that I see is that deco / ndl aren't really a good fit for the
event model. I actually disabled the drawing of the little yellow
triangles for ndl events as for example on the Uemis those events are
created whenever the remaining non stop time changes - and that can be
every few seconds.
The correct solution may be to treat this as a function of the samples,
but for now this works and is tested with both OSTC and Uemis SDA.
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
Apply the decimal time conversion hack for JDiveLog import if there are
fewer than 2 digits in the decimal part (and the value is less than 60).
Signed-off-by: Miika Turkia <miika.turkia@gmail.com>
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
Use the decimal time format fallback also for one digit numbers as
Linus suggested. Thus 1.1 min would result in 1 min 6 sec.
Signed-off-by: Miika Turkia <miika.turkia@gmail.com>
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
This actually triggers for one of our insane test dives (test15): it has
no samples, so we created a fake dive computer entry with a fake
profile in it, but we didn't copy the events over.
Having a dive with no samples, yet having events from the dive computer,
sounds pretty bogus. But that test-case did show that when that bogus
situation happens, we had two independent buglets: (a) we didn't insert
the events into the fake dive computer entry we used and (b) we would
then mix up the events of the fake dive computer entry with the first
dive computer of a dive.
Fix this, just to make test15 happy again. And eventually, when we
actually plot the information for multiple dive computers, fixing case
(b) would become necessary even for real dives.
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
This removes the tripflag name array, since it's not actually useful.
The only information we ever save in the XML file is whether a dive is
explicitly not supposed to ever be grouped with a trip ("NOTRIP"), and
everything else is implicit.
I'm going to simplify the trip flags further (possibly removing it
entirely - like I did for dive trips already), and don't like having to
maintain the tripflag_names[] array logic.
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
Both dives and dive trips have the same 'tripflag' thing, but they are
used very differently. In particular, for dive trips, the only case
that has any meaning is the TF_AUTOGEN case, so instead of having that
trip flag, replace it with a bitfield that says whether the trip was
auto-generated or not.
And make the one-bit bitfields explicitly unsigned. Signed bitfields
are almost always a mistake, and can be confusing.
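Concretely, the flag now looks something like this (sketch; the real struct
has more fields):
    typedef struct dive_trip {
        /* ... */
        unsigned int autogen:1;    /* trip was auto-generated, not created by the user */
    } dive_trip_t;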
Also remove a few now stale macros that are no longer needed now that we
don't do the GList thing for dive list handling, and our autogen logic
has been simplified.
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
This is a hack to convert time stored in decimal notation to proper
seconds. When using metric units, JDiveLog by default stores the number
of seconds after the decimal point (1.20 is 1 minute 20 seconds). In
some odd cases the fractional part is reportedly 100-based, so we need
to convert it to seconds (1.33333 will become 1 minute 20 seconds).
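Taken together with the refinements described earlier in this log, the
conversion amounts to roughly this (illustrative helper only, not the actual
parser code):
    /* Sketch: convert a JDiveLog "minutes.fraction" duration to seconds.
     * Normally the fraction already is seconds ("1.20" == 1:20); in the odd
     * 100-based case it is a true decimal fraction ("1.33333" == 1:20), and
     * the same decimal fallback is used for single-digit fractions
     * ("1.1" == 1:06). */
    static int jdivelog_duration_to_seconds(int minutes, int fraction, int digits)
    {
        int divisor = 1, i, seconds;

        for (i = 0; i < digits; i++)
            divisor *= 10;

        if (digits == 2 && fraction < 60)
            seconds = fraction;
        else
            seconds = (fraction * 60 + divisor / 2) / divisor;
        return minutes * 60 + seconds;
    }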
Signed-off-by: Miika Turkia <miika.turkia@gmail.com>
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
This makes the dive trip auto-generation a separate pass from the
showing of the dive trips, which makes things much more understandable.
It simplifies the code a lot too, because it's much more natural to
generate the automatic trip data by walking the dives from oldest to
newest (while the tree model wants to walk the other way).
It gets rid of the most annoying part of using the gtk tree model for
dive trip management, but some still remains.
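The pass itself has a very simple structure (sketch; the grouping helpers
here are illustrative only):
    /* Sketch: walk the dives oldest to newest and group consecutive dives
     * into auto-generated trips, skipping dives explicitly marked "NOTRIP". */
    static void autogen_trips(void)
    {
        dive_trip_t *trip = NULL;
        int i;

        for (i = 0; i < dive_table.nr; i++) {
            struct dive *dive = dive_table.dives[i];

            if (dive_marked_notrip(dive))               /* illustrative helper */
                continue;
            if (!trip || !dive_fits_trip(dive, trip))   /* illustrative helper */
                trip = create_autogen_trip(dive);       /* illustrative helper */
            add_dive_to_trip(dive, trip);
        }
    }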
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
It had become a write-only field (apart from some now useless debugging)
when simplifying the remove_autogen_trips() function.
So remove it.
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
I'm trying to remove (or at least simplify) the gtk tree model usage for
our trip handling, but I'm doing it in small chunks. The goal is to
just do all our trip handling logic explicitly using our own data
structures, and use the gtk tree model purely for showing the end
result.
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
We already kept a count of dives per trip in order to figure out when
there are no more dives left and the trip needs to be freed. Now we
explicitly keep track of the list of dives associated with the trip too,
which simplifies the "find the time of the trip" logic.
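Conceptually the trip structure now looks something like this (sketch; the
real one carries more fields):
    typedef struct dive_trip {
        timestamp_t when;        /* start of the trip, i.e. its earliest dive */
        struct dive *dives;      /* the dives that belong to this trip */
        int nrdives;             /* the count we already kept */
        struct dive_trip *next;
    } dive_trip_t;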
We may want to sort it in time, but for now this is mainly about trying
to keep track of the divetrip relationships explicitly. I want to move
away from the whole "use the gtk tree model to keep track of things"
approach.
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
When picking the "better" trip, we stupidly looked not at the trip
location, but at the _dive_ location.
Which obviously didn't actually pick the "better" trip information at
all, since it never actually looked at the trip itself.
Oops.
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
Commit bb6b6b49a6d4 "Start merging dives by keeping the dive computer data
from both dives" created a compile time warning. This simply adds an #if 0 /
#endif around the unused code.
Yes, this might accelerate bit rot in the code, but I just dislike the
warning message when compiling Subsurface.
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
Now that we have dive computer device ID fields etc, we can do a much
better job of merging the dive computer data.
The rule is
- if we actually merge two disjoint dives (ie extended surface interval
causing the dive computer to think the dive ended and turning two of
those dives into one), find the *matching* dive computer from the
other dive to combine with.
- if we are merging dives at the same time, discard old-style data with
no dive computer info (ie act like a re-download)
- if we have new-style dive computers with identifiers, take them all.
which seems to work fairly well.
There's more tweaking to be done, but I think this is getting to the
point where it largely works.
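In pseudo-C, the rules come out to roughly this (sketch; all helper names
here are made up):
    /* Sketch: how the dive computer data of dives a and b gets combined. */
    static void merge_dive_computers(struct dive *res, struct dive *a, struct dive *b,
                                     int same_time)
    {
        if (!same_time) {
            /* two halves of one dive split by a long surface interval:
             * join each dive computer with the matching one from the other half */
            join_matching_computers(res, a, b);
        } else if (!has_dc_identifiers(a)) {
            /* old-style data with no dive computer info:
             * act like a re-download and just take the new data */
            take_computers(res, b);
        } else {
            /* new-style dive computers with identifiers: keep them all */
            take_computers(res, a);
            take_computers(res, b);
        }
    }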
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
It's annoying to see water salinity data in the XML that isn't relevant,
and adding the default value just because the dive got downloaded from
libdivecomputer is definitely wrong.
We should set the water salinity explicitly only if we have it
explicitly set on the dive computer.
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
This simplifies the vendor/product fields into just a single "model"
string for the dive computer, since we can't really validly ever use it
any other way anyway.
Also, add 'deviceid' and 'diveid' fields: they are just 32-bit hex
values that are unique for that particular dive computer model. For
libdivecomputer, they are basically the first word of the SHA1 of the
data that libdivecomputer gives us.
(Trying to expose it in some other way is insane - different dive
computers use different models for the ID, so don't try to do some kind
of serial number or something like that)
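Deriving the id is simply this (sketch; any SHA1 implementation will do,
OpenSSL's is shown here, and which end of the hash counts as the "first
word" doesn't matter as long as it's consistent):
    #include <stdint.h>
    #include <openssl/sha.h>

    /* Sketch: a stable 32-bit id derived from the raw data that
     * libdivecomputer hands us for the device or dive. */
    static uint32_t id_from_data(const unsigned char *data, size_t len)
    {
        unsigned char hash[SHA_DIGEST_LENGTH];

        SHA1(data, len, hash);
        return hash[0] | (hash[1] << 8) | (hash[2] << 16) | ((uint32_t)hash[3] << 24);
    }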
For the Uemis Zurich, which doesn't use the libdivecomputer import, we
currently only set the model name. The computer does have some kind of
device ID string, and we could/should just do the same "SHA1 over the
ID" to give it a unique ID, but the pseudo-xml parsing confuses me, so
I'll let Dirk fix that up.
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
Also, note that we do *not* do the "find_sample_offset()" any more when
we merge two dives that happen at the same time - since we just keep
both sets of dive computer data around.
But we keep the function to find the best offset around, because we may
well want to use it later when *showing* the dive, and trying to match
up the different sample data from the multiple dive computers associated
with the dive.
Because of that, this causes warnings about the now unused function.
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
Now that we have more complete dive computer information, we can use
that to match the dives we download, and stop with the hacky "Would we
merge this" check.
For XML files without the explicit dive computer information, go back to
checking the exact dive time.
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
This also knows how to save and restore multiple dive computers in the
XML data, but there's no way to actually *create* that kind of
information yet (nor do we display it). Tested by creating fake XML
files with multiple dive computers by hand so far.
The dive computer information right now contains (apart from the sample
and event data that we've always had):
- the vendor and product name of the dive computer
- the date of the dive according to the dive computer (so if you change
the dive date manually, the dive computer date stays around)
Note that if the dive computer date matches the dive date, we won't
bother saving the redundant information in the XML file.
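The per-dive data conceptually becomes a list of records like this (sketch;
the real struct carries more, and the sample/event types are the ones we
already had):
    struct divecomputer {
        timestamp_t when;              /* the dive computer's own time for the dive */
        const char *vendor, *product;  /* who made it / which model */
        int samples;
        struct sample *sample;
        struct event *events;
        struct divecomputer *next;     /* additional dive computers for the same dive */
    };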
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>