When the data file is closed we should reset the events that we offer for
filtering.
Reported-by: Sergey Starosek <sergey.starosek@gmail.com>
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
Commit a52b0aa5ea8d ("Show Gradient Factors in plot when showing
calculated ceilings") incorrectly modified the gc which caused the mouse
position no longer correctly being correlated to the time on the plot.
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
When we create the event names, the name itself does not include the
information about whether the event is the beginning or end of some
state, so we end up having things like events named "deco" and then in
the event flags it says whether this is the *beginning* of deco, or the
end.
And when we show the event, we only used to show the name. This patch
makes us show whether it's the begin or end event for events that have
those flags. So now you see "deco begin" and "deco end" instead of just
two events both called "deco".
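A minimal sketch of how the displayed name can be put together - SAMPLE_FLAGS_BEGIN
and SAMPLE_FLAGS_END are the bits libdivecomputer uses for this, while the helper
and the exact struct layout shown here are just illustrative:

    #include <stdio.h>

    /* assumes the usual 'struct event' with 'flags' and 'name' members */
    static void event_display_name(const struct event *ev, char *buf, int size)
    {
            const char *suffix = "";

            if (ev->flags & SAMPLE_FLAGS_BEGIN)
                    suffix = " begin";
            else if (ev->flags & SAMPLE_FLAGS_END)
                    suffix = " end";
            snprintf(buf, size, "%s%s", ev->name, suffix);
    }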
It would perhaps be nice if we somehow showed the range between the
events too, and paired them up visually in some way, but that's a separate
and much more difficult thing to do.
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
Mostly coding style and whitespace changes plus making lots of functions
static that have no need to be extern. This also helped find a bit of code
that is actually no longer used.
This should have absolutely no functional impact - all changes should be
purely cosmetic. But it removes a bunch of lines of code and makes the
rest easier to read.
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
This adds the GFlow/high values used to calculate the ceiling (if any).
Right now it shows those numbers even if at no point of the dive there was
an actual ceiling (but only if showing the ceiling itself is enabled).
This should make it easier for the user to make sense of the calculated
ceiling, especially when posting screenshots.
As an aside - for some dive computers like the OSTC and the Shearwaters we
should be able to also plot the GF used in their calculations, which might
be interesting for comparison purposes, as both of them also give us the
ceiling (lowest deco stop) calculated during the dive.
See #13
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
This made sense briefly when libdivecomputer reported ceiling data through
events with those flags, but it actually made us hide valid events from
some divecomputers that give us only very limited information (e.g., deco
events from some Suunto divecomputers).
Reported-by: Henrik Brautaset Aronsen <subsurface@henrik.synth.no>
Analyzed-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
The plot-info per-event 'same_cylinder' logic was fragile, and caused
us to not print the beginning pressure of the first cylinder.
In particular, there was a nasty interaction with not all plot entries
having pressures, and the whole logic that avoids some of the early
plot entries because they are fake entries that are just there to make
sure that we don't step off the edge of the world. When we then only
do certain things on the particular entries that don't have the same
cylinder as the last plot entry, things don't always happen like they
should.
Fix this by:
- get rid of the computed "same_cylinder" state entirely. In all the
cases where we used it, we might as well just look at what the last
cylinder we used was, and thus "same_cylinder" is just about testing
the current cylinder index against that last index.
- get rid of some of the edge conditions by just writing the loops
more clearly, so that they simply don't have special cases. For
example, instead of setting some "last_pressure" for a cylinder at
cylinder changes, just set the damn thing on every single sample. The
last pressure will automatically be the pressure we set last! The code
is simpler and more straightforward.
So this simplifies the code and just makes it less fragile - it
doesn't matter if the cylinder change happens to happen at a sample
that doesn't have a pressure reading, for example, because we no
longer care so deeply about exactly which sample the cylinder change
happens at. As a result, the bug Miika noticed just goes away.
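A rough sketch of the resulting loop shape - SENSOR_PRESSURE(), MAX_CYLINDERS and
the entry fields are meant as illustration here, not as the exact code:

    int last_cylinder = -1;
    int last_pressure[MAX_CYLINDERS] = { 0 };

    for (i = 0; i < pi->nr; i++) {
            struct plot_data *entry = pi->entry + i;
            int cyl = entry->cylinderindex;

            if (cyl != last_cylinder) {
                    /* whatever needs doing at a cylinder change */
                    last_cylinder = cyl;
            }
            /* set the last pressure on every entry that has one - the most
             * recent assignment automatically is the last pressure */
            if (SENSOR_PRESSURE(entry))
                    last_pressure[cyl] = SENSOR_PRESSURE(entry);
    }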
Reported-by: Miika Turkia <miika.turkia@gmail.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
Commit b625332ca5ff "Display even constant temperature graph" was a little
too aggressive. If we have no temperature data at all it caused us to plot
a temperature line for absolute zero...
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
Dive profile does not display the temperature graph if we have a
constant temperature (e.g. only one reading at the start of the dive).
This patch draws the temperature graph even if max and min temperatures
are the same.
Signed-off-by: Miika Turkia <miika.turkia@gmail.com>
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
Temperatures can actually be negative, which means that rounding by
adding 0.5 and casting to 'int' is not correct.
We could use '(int)(rint(val))' instead, but the only place we care
about might as well just print out the floating point representation
with a precision of two digits instead. So if you have a dive computer
that gives you the precision, you might see '3.5˚C' as the temperature.
Remove the helper functions that nobody uses and that get the rounding
wrong anyway.
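A tiny standalone illustration of why the old rounding goes wrong for negative
values, and what the alternatives look like:

    #include <math.h>
    #include <stdio.h>

    int main(void)
    {
            double t = -3.6;

            /* truncation towards zero: prints -3, not the correctly rounded -4 */
            printf("%d\n", (int)(t + 0.5));
            /* rint() rounds to nearest: prints -4 */
            printf("%d\n", (int)rint(t));
            /* or just print the value with limited precision: -3.60 */
            printf("%.2f\n", t);
            return 0;
    }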
Reported-by: Henrik Brautaset Aronsen <subsurface@henrik.synth.no>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
This moves the fields 'duration', 'surfacetime', 'maxdepth',
'meandepth', 'airtemp', 'watertemp', 'salinity' and 'surface_pressure'
to the per-divecomputer data structure. They are filled in by the dive
computer, and normally not edited.
NOTE! All actual *use* of this data was then changed from dive->field to
dive->dc.field programmatically with a shell-script and sed, and the
result then edited for details. So while the XML save and restore code
has been updated, all the displaying etc will currently always just show
the first dive computer entry.
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
This patch removes the need for the "string" pressurebuf in planner.c.
It also adds a unit to the partial pressures shown in the mouse
overlay, which are always displayed in bar.
BTW: Has anyone seen a pO2 shown in PSI?
Signed-off-by: Jan Schubert <Jan.Schubert@GMX.li>
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
This patch centralizes the definitions of surface pressure and oxygen in
air, (re)defines all such values as plain integers and adapts the
calculations.
It eliminates 11 (!) occurrences of definitions for surface pressure and
also a few for oxygen in air.
It also rewrites the calculations for EAD, END and EADD using the new
definitions, harmonizing them for OC and CC, and fixes a bug in the EADD
OC calculation.
And finally it removes the unneeded variable entry_ead in gtk-gui.c.
Jan
Signed-off-by: Jan Schubert <Jan.Schubert@GMX.li>
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
Previously we calculate the ceiling at every single second, using the
interpolated depth, but then only *save* the ceiling at the points where we
have a profile event (the whole deco_allowed_depth() function doesn't
change any state, so we can just drop it entirely at points that we aren't
going to save).
Why is it incorrect? I'll try to walk through my understanding of it, by
switching things around a bit.
- the whole "minimum tissue tolerance" thing could equally well be
rewritten to be about "maximum ceiling". And that's easier to think
about (since it's what we actually show), so let's do that.
- so turning "min_pressure" into "max_ceiling", doing the whole
comparison inside the loop means that we are calculating the
maximum ceiling value for the duration of the last sample. And then
instead of visualizing the ceiling AT THE TIME OF MAXIMUM CEILING, we
visualize that maximal ceiling value AT THE TIME OF THE SAMPLE.
End result: we visualize the ceiling at the wrong time. We visualize
what was *a* ceiling somewhere in between that sample and the previous
one, but we then assign that value to the time of the sample itself.
So it ends up having random odd effects.
And that also explains why you only see the effect during the ascent.
During the descent, the max ceiling will be at the end of our
linearization of the sampling, which is - surprise surprise - the position
of the sample itself. So we end up seeing the right ceiling at the right
time while descending. So the visualization matches the math.
But during desaturation, the maximum ceiling is not at the end of the
sample period, it's at the beginning. So the whole "max ceiling" thing has
basically turned what should be a smooth graph into something that
approaches being a step-wise graph at each sample. Ergo: a ripple.
And doing the "max_ceiling during the sample interval" thing may sound
like the safe thing to do, but the thing is, that really *is* a false
sense of safety. The ceiling value is *not* what we compute. The ceiling
value is just a visualization of what we computed. Playing games with it
can only make the visualization of the real data worse, not better.
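In code terms the fix boils down to a shape like this - update_tissues() and
interpolate() are illustrative stand-ins, and the deco_allowed_depth() argument
list is only indicative:

    /* walk the interval second by second to update the tissue state ... */
    for (j = t0 + 1; j <= t1; j++) {
            int depth = interpolate(depth0, depth1, j - t0, t1 - t0);

            tissue_tolerance = update_tissues(depth, 1 /* second */, dive);
    }
    /* ... but record the ceiling for the state AT the sample time itself,
     * not the maximum ceiling seen anywhere inside the interval */
    entry->ceiling = deco_allowed_depth(tissue_tolerance, surface_pressure,
                                        dive, smooth);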
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
- MOD: Maximum Operating Depth based on a configurable limit
- EAD: Equivalent Air Depth considering N2 and (!) O2 narcotic
- END: Equivalent Nitrogen (Narcotic) Depth considering just N2 narcotic
(ignoring O2)
- EADD: Equivalent Air Density Depth
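For reference, with depth d and the results in msw, gas fractions between 0 and 1
and roughly 10 msw per bar, the textbook forms matching the definitions above are
approximately as follows (the actual code may differ in constants, units and
rounding):

    MOD  = (pO2_limit / fO2 - 1) * 10
    EAD  = (d + 10) * (fN2 + fO2) - 10           [N2 and O2 both narcotic]
    END  = (d + 10) * fN2 / N2_IN_AIR - 10       [only N2 narcotic]
    EADD = (d + 10) * density_mix / density_air - 10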
Please note that some people and even diving organisations have opposite
definitions for EAD and END. Considering that A stands for Air, let's
choose the above. And considering that N stands for Nitrogen, it also fits
in this scheme.
This patch moves N2_IN_AIR from deco.c to dive.h as this is already used
in several places and might be useful for future use also. It also
respecifies N2_IN_AIR to a more correct value of 78.084%; the former value
also included all gases other than oxygen that appear in air. If someone
needs to use the former value it would be more correct to use 1-O2_IN_AIR
instead.
Signed-off-by: Jan Schubert <Jan.Schubert@GMX.li>
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
The max Y value of the partial pressure graph grid tends to be way too
high when only pO2 or pHe is enabled.
Signed-off-by: Henrik Brautaset Aronsen <subsurface@henrik.synth.no>
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
.. and rename the badly named 'output_units/input_units' variables.
We used to have this confusing thing where we had two different units
(input vs output) that *look* like they are mirror images, but in fact
"output_units" was the user units, and "input_units" are the XML parsing
units.
So this renames them to be clearer. "output_units" is now just "units"
(it's the units a user would ever see), and "input_units" is now
"xml_parsing_units" and set by the XML file parsers to reflect the units
of the parsed file.
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
We have several places where we interpolate the depth based on two
samples and the time between them. Some of them use floating point, some
of them don't, some of them meant to do it but didn't.
Just use a common helper function for it. I seriously doubt the floating
point here really matters, since doing it in integers is not going to
overflow unless we're interpolating between two samples that are hours
apart at hundreds of meters of depth, but hey, it gives that rounding to
the nearest millimeter. Which I'm sure matters.
Anyway, we can probably just get rid of the rounding and the floating
point math, but it won't really hurt either, so at least do it
consistently.
The interpolation could be for other things than just depth, but we
probably don't have anything else we'd want to interpolate. But make the
function naming generic just in case.
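Something along these lines (a sketch - the real helper may well skip the rounding
or the floating point entirely):

    #include <math.h>

    /* interpolate 'part/whole' of the way from a to b */
    static int interpolate(int a, int b, int part, int whole)
    {
            if (whole <= 0)
                    return a;
            return rint(a + (b - a) * (double)part / whole);
    }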
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
While one might argue that multiple samples with the same time are 'odd',
that still shouldn't be an excuse to incorrectly reset the ceiling value
for them back to 0.
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
o) Instead of using gradient factors as means of comparison, I now use
pressure (as in: maximal ambient pressure).
o) tissue_tolerance_calc() now computes the maximal ambient pressure while
respecting gradient factors. For this, it needs to know about the
surface pressure (as reference for GF_high), thus gets *dive as an
argument. It is called from add_segment(), which therefore also needs
*dive as an additional argument.
o) This implies deco_allowed_depth is now mainly an ambient-pressure-to-depth
conversion with decorations to avoid negative depth (i.e. no deco
obligation), implementation of quantization (!smooth => multiples of 3m)
and explicit setting of the last deco depth (e.g. 6m for O2 deco).
o) gf_low_pressure_this_dive (slight change of name), the max depth in
pressure units, is updated in add_segment. I set the minimal value in
buehlmann_config to the equivalent of 20m as otherwise good values of
GF_low add a lot of deco to shallow dives which do not need deep stops
in the first place.
o) The bogus loop is gone, as well as actual_gradient_limit() and
gradient_factor_calculation() and large parts of deco_allowed_depth(),
although I did not delete the code but left it commented out.
o) The meat is in the formula in lines 147-154 of deco.c. Here is the
rationale:
Without gradient factors, the M-value (i.e. the maximal tissue pressure)
at a given depth is given by ambient_pressure / buehlmann_b + a.
According to "Clearing Up The Confusion About Deep Stops" by Erik C.
Baker (as found via Google), the effect of the gradient factors is to
replace this by a reduced affine relation (i.e. another line) such that
at the surface the difference between M-value and ambient pressure is
reduced by a factor GF_high and at the maximal depth by a factor
GF_low.
That is, we are looking for parameters alpha and beta such that
alpha * surface + beta = surface + gf_high * (surface/b + a - surface)
and
alpha * max_p + beta = max_p + gf_low * (max_p/b + a - max_p)
This can be solved for alpha and beta and then inverted to obtain the
max ambient pressure given tissue loadings. The result is the above
mentioned formula.
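Spelling the solution out (M(p) = p/b + a - p is just shorthand here for the
difference between the M-value and the ambient pressure), subtracting the two
equations gives

    alpha = 1 + (gf_low * M(max_p) - gf_high * M(surface)) / (max_p - surface)
    beta  = surface + gf_high * M(surface) - alpha * surface

and for a given tissue loading t the tolerated ambient pressure follows by
inverting the line, p_tol = (t - beta) / alpha; the most restrictive tissue then
determines the ceiling.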
Signed-off-by: Robert C. Helling <helling@atdotde.de>
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
A strange and buggy dive where time goes backwards (right now easy to
create with the dive plan editor) can cause us to run out of plot info
elements.
This prevents that from causing memory corruption by refusing to go back
in time.
Reported-by: Dirk Hohndel <dirk@hohndel.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
Now that the pressure_time calculations are done in our "native"
integer units (millibar and seconds), we might as well keep using
integer variables.
We still do floating point calculations at various stages for the
conversions (including turning a depth in mm into a pressure in mbar),
so it's not like this avoids floating point per se. And the final
approximation is still done as a fraction of the pressure-time values,
using floating point. So floating point is very much involved, but
it's used for conversions, not (for example) to sum up lots of small
values.
With floating point, I had to think about the dynamic range in order
to convince myself that summing up small values will not subtly lose
precision.
With integers, those kinds of issues do not exist. The "lost
precision" case is not subtle, it would be a very obvious overflow,
and it's easy to think about. It turns out that for the pressure-time
integral to overflow in "just" 31 bits, we'd have to have pressures
and times that aren't even close to the range of scuba cylinder air
use (eg "spend more than a day at a depth of 200+ m").
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
I fixed the pressure-time calculations to use "proper" units, but
thinking about it some more, it turns out that units don't really
matter. As long as we use the *same* unit for calculating the
integral, and then re-calculating the step-wise entries, the units
will cancel out.
So we can simplify the "pressure_time()" function a bit, and use
whatever units are most natural for our internal representation. So
instead of using atm, use "mbar".
Now, since the units don't matter, this patch doesn't really make much
of a difference conceptually. Sure, it's a slightly simpler function,
but maybe using more "natural" units for it would be worth it. But it
turns out that using milli-bar and seconds has an advantage: we could
do all the pressure_time integral using 32-bit integers, and we'd
still be able to represent values that would be equivalent to staying
at 24 bar for a whole day.
This patch doesn't actually change the code to use integers, but with
this unit choice, we at least have that possibility.
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
This splits up the function to create the estimated pressures for
missing tank pressure information.
The code now has a separate pass to create the beginning and ending
pressures for segments that lack them, and fill them in to match the
overall SAC-rate for that cylinder.
In the process, it also fixes the calculation of the interpolated gas
pressure: you can see this in test-dive 13, where we switch back to the
first tank at the end of the dive. It used to be that the latter
segment of that cylinder showed in a different color from the first
segment, showing that we had a different SAC-rate. But that makes no
sense, since our interpolation is supposed to use a constant SAC-rate
for each cylinder.
The bug was that the "magic" calculation (which is just the pressure
change rate over pressure-time) was incorrect, and used the current
cylinder pressure for start-pressure calculation. But that's wrong,
since we update the current cylinder pressure as we go along, but we
didn't update the total pressure_time.
With the separate phase to calculate the segment beginning/ending
pressures, the code got simplified and the bug stood out more.
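Conceptually the per-entry interpolation is then just (illustrative names):

    /* how far into the segment we are, measured in pressure-time, decides
     * how much of the total pressure drop to apply; start/end are the fixed
     * segment boundary pressures, not a running value */
    entry->pressure = start_pressure -
            (start_pressure - end_pressure) * pt_sofar / pt_total;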
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
The code was using bar, not atm to calculate the pressure_time
multiplier. But SAC-rate is relative to atm.
We could do the correction at the end (and keep the pressure_time in
"bar-seconds"), but let's just use the expected units during the
integration. Especially since this also makes a helper function to do
the calculations (with variables to keep the units obvious) instead of
having multi-line expressions that have the wrong units.
This fixes what I thought were rounding errors for the pressure graphs.
They were just unit confusion.
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
This splits up the (very complex) function that calculates all the plot
info data, so that the gas pressure logic is in several helper
functions, and the deco and partial pressure calculations are in a
function of their own.
That makes the code almost readable.
This also changes the cylinder pressure calculations so that if you have
manually set the beginning and end pressures, those are the ones we will
show (by making them fake "sensor pressures"). We used to show some
random pressure that was related to the manually entered ones only
distantly (through various rounding phases and the SAC-rate calculations).
That does make the rounding errors more obvious in the graph, but we can
fix that separately.
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
This simplifies - and improves - the code to generate the plot info
entries from the samples.
We used to generate exactly one plot info entry per sample, and then -
because the result doesn't have high enough granularity - we'd
generate additional plot info entries at gas change events etc.
Which resulted in all kinds of ugly special case logic. Not only for
the gas switch, btw: you can see the effects of this in the deco graph
(done at plot entry boundaries) and in the gas pressure curves.
So this throws that "do special plot entries for gas switch events"
code away entirely, and replaces it with a much more straightforward
model: we generate plot entries at a minimum of ten-second intervals.
If you have samples more often than that, you'll get more frequent
plot entries, but you'll never get less than that "every ten seconds".
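The generation loop then has roughly this shape - add_entry() and interpolate()
are illustrative helpers, not necessarily the real names:

    int lasttime = 0, lastdepth = 0;

    for (i = 0; i < dc->samples; i++) {
            struct sample *sample = dc->sample + i;
            int time = sample->time.seconds;
            int depth = sample->depth.mm;
            int t = lasttime;

            /* never let two plot entries be more than 10 seconds apart */
            while (time - t > 10) {
                    t += 10;
                    add_entry(pi, t, interpolate(lastdepth, depth,
                                                 t - lasttime, time - lasttime));
            }
            add_entry(pi, time, depth);
            lasttime = time;
            lastdepth = depth;
    }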
As a result, the code is smaller and simpler (99 insertions, 161
deletions), and actually does a better job too.
You can see the difference especially in the test dives that only have
a few entries (or if you create a new dive without a dive computer,
using the "Add Dive" menu entry). Look at the deco graph of test-dive
20 before and after, for example. You can also see it very subtly in
the cylinder pressure curves going from line segments to curves on
that same dive.
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
For dives with no samples, we create a fake dive computer with a set of
made-up samples and use those to display the profile.
However, the actual calculations to do the maximum duration and depth
etc were always done with the "real" dive information, which is empty.
As a result, the scale of the plot ended up being bogus, and part of
the dive would be missing.
Trivially fix by just passing the same dive computer information to
calculate_max_limits() that we use for everything else.
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
This comes with absolutely no GUI - so the plan literally needs to be
compiled into Subsurface. Not exactly a feature, but this allowed me to
focus on the planning part instead of spending time on tedious UI work.
A new menu "Planner" with entry "Test Planner" calls into the hard-coded
function in planner.c. There a simple dive plan can be constructed with
calls to plan_add_segment(&diveplan, duration, depth at the end, fO2, pO2)
Calling plan(&diveplan) does the deco calculations and creates deco stops
that keep us below the ceiling (with the GFlow/high values currently
configured). The stop levels used are defined at the top of planner.c in
the stoplevels array - there is no need to do the traditional multiples of
3m or anything like that.
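For example, a hard-coded plan could look roughly like this - the parameter units
here (seconds, mm, fractions) are my guess, check plan_add_segment() in planner.c
for the real ones:

    struct diveplan diveplan = { 0 };

    plan_add_segment(&diveplan, 3 * 60, 30000, 0.21, 0);   /* 3min descent to 30m */
    plan_add_segment(&diveplan, 20 * 60, 30000, 0.21, 0);  /* 20min at 30m on air */
    plan(&diveplan);                                       /* ascent and deco stops */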
The dive including the ascents and deco stops all the way to the surface
is completed and then added as a simulated dive to the end of the divelist
(I guess we could automatically select it later) and can be viewed.
This is crude but shows the direction we can go with this. Envision a nice
UI that allows you to simply enter the segments and pick the desired
stops.
What is missing is the ability to give the algorithm additional gases that
it can use during the deco phase - right now it simply keeps using the
last gas used in the diveplan.
All that said, there are clear bugs here - and sadly they seem to be in
the deco calculations, as with the example given, the ceiling that is
calculated makes no sense. When displayed in smooth mode it has very
strange jumps up and down that I wouldn't expect. For example with GF
35/75 (the default) the deco ceiling when looking at the simulated dive
jumps from 16m back up to 13m around 14:10 into the dive. That seems very
odd.
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
Without this the cairo_close_path call could do silly looking things
(intersecting polygons...).
Reported-by: "Robert C. Helling" <helling@atdotde.de>
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
The old implementation was broken in several ways.
For one thing the GF values are factors (percentages divided by 100), so
they should normally be 0 < GF < 1 (well, some crazy people like to go
above that).
With this most of the Bühlmann config constants were wrong.
Furthermore, after we adjust the pressure tolerance based on the gradient
factors, we need to convert this back into a depth (instead of passing
back the unmodified depth - oops).
Finally, this commit adds closed circuit support to the deco calculations.
Major progress and much more useful at this stage.
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
This also initializes the N2 tissue saturations to correct numbers
(setting them to zero was clearly silly).
With this commit we walk back in the dive_table until we find a surface
interval that's longer than 48h. Or a dive that comes after the last one
we looked at; that would indicate that this is a divelist that contains
dives from multiple divers or dives that for other reasons are not
ordered. In a sane environment one would assume that the dives that need
to be taken into account when doing deco calculations are organized as one
trip in the XML file and so this logic should work.
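A sketch of that walk-back (field and helper names are meant to be illustrative):

    int i = divenr;

    while (i > 0) {
            struct dive *prev = get_dive(i - 1);
            struct dive *cur = get_dive(i);

            if (prev->when > cur->when)
                    break;  /* dives out of order - give up */
            if (cur->when - (prev->when + prev->duration.seconds) > 48 * 3600)
                    break;  /* surface interval longer than 48h */
            i--;
    }
    /* dives i .. divenr now feed into the deco calculation */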
One major downside of the current implementation is that we recalculate
everything whenever the plot_info is recreated - which happens quite
frequently, for example when resizing the window or even when we go into
loup mode. While this isn't all that compute intensive, this is an utter
waste and we should at least cache the saturation inherited from previous
dives (and clear that number when the selected dive changes). We don't
want to cache all of it as the recreation of the plot_info may be
triggered by the user changing equipment (and most importantly, gasmix)
information. In that case the deco data for this dive does indeed have to
be recreated. But without changing the current dive the saturation after
the last surface interval should stay the same.
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
Usually dive computers show the ceiling in terms of the next deco stop -
and those are in 3m increments. This commit also adds the ability to choose
either the typical 3m increments or the smooth ceiling that the Bühlmann
algorithm actually calculates.
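The quantization itself is just a round-up to the next 3m multiple, roughly:

    /* round the calculated ceiling up (i.e. deeper) to the next 3m stop */
    if (!smooth)
            ceiling_mm = ((ceiling_mm + 2999) / 3000) * 3000;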
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
This is on top of the deco information reported by the dive computer (in a
different color - currently ugly green). The user needs to enable this via
the Tec page of the preferences.
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
The existing code had the rather unfortunate Ctrl-C binding for displaying
the next divecomputer and no way to go back to the previous one. With this
commit we use our keyboard grab to map Left and Right to previous and next
divecomputer. Much nicer.
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
This clarifies/changes the meaning of our "cylinderindex" entry in our
samples. It has been rather confused, because different dive computers
have done things differently, and the naming really hasn't helped.
There are two totally different - and independent - cylinder "indexes":
- the pressure sensor index, which indicates which cylinder the sensor
data is from.
- the "active cylinder" index, which indicates which cylinder we actually
breathe from.
These two values really are totally independent, and have nothing
what-so-ever to do with each other. The sensor index may well be fixed:
many dive computers only support a single pressure sensor (whether
wireless or wired), and the sensor index is thus always zero.
Other dive computers may support multiple pressure sensors, and the gas
switch event may - or may not - indicate that the sensor changed too. A
dive computer might give the sensor data for *all* cylinders it can read,
regardless of which one is the one we're actively breathing. In fact, some
dive computers might give sensor data for not just *your* cylinder, but
your buddy's.
This patch renames "cylinderindex" in the samples as "sensor", making it
quite clear that it's about which sensor index the pressure data in the
sample is about.
The way we figure out which is the currently active gas is with an
explicit gas change event. If a computer (like the Uemis Zurich) joins the
two concepts together, then a sensor change should also create a gas
switch event. This patch also changes the Uemis importer to do that.
Finally, it should be noted that the plot info works totally separately
from the sample data, and is about what we actually *display*, not about
the sample pressures etc. In the plot info, the "cylinderindex" does in
fact mean the currently active cylinder, and while it is initially set to
match the sensor information from the samples, we then walk the gas change
events and fix it up - and if the active cylinder differs from the sensor
cylinder, we clear the sensor data.
[Dirk Hohndel: this conflicted with some of my recent changes - I think
I merged things correctly...]
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
This commit makes deco handling in Subsurface more compatible with the way
libdivecomputer creates the data. Previously we assumed that having a
stopdepth or stoptime and no ndl meant that we were in deco. But
libdivecomputer supports many dive computers that provide the deco state
of the diver but with no information about the next stop or the time
needed there. In order to be able to model this in Subsurface this adds an
in_deco flag to the samples. This is only stored to the XML file when it
changes so it doesn't add much overhead but will allow us to display some
deco information on dive computers like the Atomic Aquatics Cobalt or many
of the Suuntos (among others).
The commit also removes the old event-based deco code that was already
commented out, and fixes the code so that the deco / ndl information is
stored for the very last sample as well.
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
We only store the model/deviceid/nickname for those dive computers that
are mentioned in the XML file. This should make the XML files nicely
self-contained.
This also changes the code to consistently use model & deviceid to
identify a dive computer. The deviceid is NOT guaranteed to be collision
free between different libdivecomputer backends...
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
Having it there with the model information seemed to make sense but on
second thought it's the wrong spot to keep that information, especially
since we were storing it in the XML file in every single dive.
This change removes the nickname member from the divecomputer and makes
the rest of the code reasonably self-consistent. It does not add much of
the new code for the new design to handle nicknames.
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
This adds the capability to actually view all your dive computers, by
adding a menu item under "Log"->"View"->"Next DC" to show the next dive
computer.
Realistically, if you actually commonly use this, you'd use the
accelerator shortcut. Which right now is Ctrl-C ("C for Computer"),
which is probably a horrible choice.
I really would want to have nice "next/prev dive" accelerators too,
because the cursor keys don't work very well with the gtk focus issues.
Being able to switch between dives would also make the "just the dive
profile, maam" view (ctrl-2) much more useful.
The prev/next dive in the profile view should probably be done with a
keyboard action callback, which also avoids some of the limitations of
accelerators (i.e. you can make any key do the action). Some gtk person,
please?
Anyway, this commit only does the dive computer choice thing, and only
using the accelerators.
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
This changes two things to improve the appearance of the profile:
- the partial pressure scale is now in 0.5 increments if the total is <= 4
and in 1.0 increments if it is > 4.
- the depth marker lines end slightly below the depth chart so that we no
longer have overlap between the depth scale and the partial pressure
scale.
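In code the first change is just a choice along the lines of (sketch, names
illustrative):

    /* pick the partial pressure grid increment from the maximum value shown */
    double pp_increment = (max_pp <= 4.0) ? 0.5 : 1.0;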
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
In profile.c:create_plot_info(), store the address of the memory most
recently allocated for the plot data entries in the static variable
"last_pi_entry". On each call to create_plot_info(), if "last_pi_entry"
isn't a NULL pointer, free the memory at that address.
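A minimal sketch of the pattern (simplified - the real allocation and sizing live
in create_plot_info()):

    static struct plot_data *last_pi_entry;

    /* inside create_plot_info(): free whatever we handed out last time,
     * then remember the new allocation for the next call */
    if (last_pi_entry)
            free(last_pi_entry);
    last_pi_entry = pi->entry = malloc(nr_entries * sizeof(struct plot_data));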
Signed-off-by: Lubomir I. Ivanov <neolit123@gmail.com>
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
We maintain a list of dive computers that we know about (by deviceid) and
their nicknames in our config. If the user downloads dives from a dive
computer that we haven't seen before, we give them the option to set a
nickname for that dive computer. That nickname is displayed in the profile
(and stored in the XML file, assuming it is not the same as the model).
This implementation attempts to make sure that it correctly deals with
utf8 nicknames.
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
Showing the depth scale all the way to the bottom of the profile plot
looks strange when there are partial pressure graphs down there. So
instead we only plot down to the next marker below the maximum depth of
the actual dive.
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>