Commit graph

8 commits

Author SHA1 Message Date
Andrew Clayton
7fe652ab57 file.c: Fix a file descriptor leak in readfile()
In file.c::readfile() the file was being opened once at fd declaration
time and then again a few lines later, but only closed once. Remove the
open() at fd declaration time, leaving the later one where the fd check
is done.
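
A minimal sketch of the fixed pattern (the prototype is illustrative,
not the actual subsurface readfile() signature): the descriptor comes
from a single open() right where it is checked, so one open() pairs
with exactly one close().

    #include <fcntl.h>
    #include <unistd.h>

    static int readfile_pattern(const char *filename)
    {
            int fd = open(filename, O_RDONLY);      /* the only open() */

            if (fd < 0)
                    return fd;

            /* ... read and parse the file contents here ... */

            close(fd);      /* exactly one close() for the one open() */
            return 0;
    }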

Signed-off-by: Andrew Clayton <andrew@digital-domain.net>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2012-07-12 18:19:47 -07:00
Linus Torvalds
e96a1864be Fix cochran CSV pressure data import
The Cochran CSV pressure data is actually in units of '4 psi', not
plain psi.  That seems to be the resolution Cochran keeps things in
internally, and unlike the depth reading there's no conversion to
standard units in the export (for depth, the quarter-foot resolution
is converted to tenths of feet when exporting).

Yeah, none of this makes any sense to me either, but I knew it was the
case.  I had just forgotten that factor-of-four when I did the importer.

With this fix, I get the same subsurface data (modulo some rounding
differences particularly for temperature) whether I go through David
McNett's UDDF converter, or just import the CSV data directly.
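
As an illustration only (the helper name and the use of millibar as
the internal unit are assumptions, not the actual importer code), the
conversion described above amounts to:

    #define PSI_TO_MBAR 68.9476     /* 1 psi = 68.9476 mbar */

    static int cochran_csv_pressure_to_mbar(int csv_value)
    {
            double psi = csv_value * 4.0;           /* the CSV column is in steps of 4 psi */
            return (int)(psi * PSI_TO_MBAR + 0.5);  /* round to the nearest mbar */
    }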

Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2012-06-19 22:41:44 -07:00
Linus Torvalds
ba31e37063 cochran: add support for importing the exported CSV files
The Cochran Analyst software can export the basic dive information as
CSV files (comma-separated values).

Individual CSV files contain just one particular type of information:
depth, temperature or cylinder pressure, which is rather inconvenient.
However, the way subsurface works, you can just import these CSV files
all as individual dives, and then subsurface will automatically merge
the dives with the same date and time - and in the process it will also
merge all the samples.
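
Roughly, the merge criterion is just the dive start time; as a sketch
(the struct layout and names here are assumptions, not the actual
subsurface code):

    #include <time.h>

    struct dive_stub {
            time_t when;            /* dive start time */
            /* samples, cylinders, ... */
    };

    /* imported dives with identical start times get merged, samples and all */
    static int same_dive(const struct dive_stub *a, const struct dive_stub *b)
    {
            return a->when == b->when;
    }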

So it turns out that we don't really need any special handling.  You can
literally just do

     subsurface <list-your-cochran-export-files-here>

and you're all done.

Of course, the CSV files really *are* pretty useless, since they don't
contain all the nice information about where the dive took place etc.
So you literally just get the dive profile.  But that's better than
getting nothing at all.

I'd love to actually be able to parse the real native Cochran Analyst
software CAN files, but in the meantime this is at least a starting
point.  And if I'm ever able to parse those nasty CAN-files, this makes
comparisons with the exports much easier.

Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2012-06-19 20:07:42 -07:00
Linus Torvalds
b1a747f537 Add some initial cochran CAN file parsing
It's broken, and currently only writes out a debug output file per dive.
I'm not sure I'll ever really be able to decode the mess that is the
Cochran Analyst stuff, but I have a few test files, along with separate
depth info from a couple of the dives in question, so in case this ever
works I can at least validate it to some degree.
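
The per-dive debug output is along these lines (a hedged sketch; the
file naming and parameters are assumptions, not the actual parser
code):

    #include <stdio.h>

    static void cochran_debug_write(int dive_no, const unsigned char *data, size_t size)
    {
            char name[64];
            FILE *f;

            snprintf(name, sizeof(name), "cochran-dive-%03d.bin", dive_no);
            f = fopen(name, "wb");
            if (!f)
                    return;
            fwrite(data, 1, size, f);       /* raw bytes, for later comparison */
            fclose(f);
    }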

The file format is definitely very intentionally obscured, though.
Annoying.  It's not like the Cochran software is actually all that good
(it's really quite a horribly nasty Windows-only app, I'm told).

Cochran Analyst is very much not the reason why people would buy those
computers.  So Cochran making their computers harder to use with other
software is just stupid.

Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2012-01-27 12:43:40 -08:00
Linus Torvalds
bea6637c03 Import: always open and read the file before checking the filename extension
Most of the parsers will want the content in memory, so keep them
simple.  The fact that the Suunto parser uses "libzip", which has to
re-open the file by path, is annoying and means the file ends up being
opened twice, etc.

But it's the odd man out, so don't design the "open_by_filename()"
function around it.  Pretty much everybody else will want to avoid
having to cook up their own IO routines.

Also, when reading the file, NUL-terminate the buffer.  This allows us
to just treat text files as large strings if we want to, and doesn't
matter for binary files (we still pass in the length explicitly).
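
Concretely, that boils down to something like the sketch below (the
struct and function names follow the description above, but are not
guaranteed to match the actual code exactly): read the whole file into
a malloc'd buffer with one spare byte, NUL-terminate it, and record
the length.

    #include <fcntl.h>
    #include <stdlib.h>
    #include <sys/stat.h>
    #include <unistd.h>

    struct memblock {
            void *buffer;
            size_t size;
    };

    static int readfile(const char *filename, struct memblock *mem)
    {
            int ret = -1;
            struct stat st;
            int fd = open(filename, O_RDONLY);

            mem->buffer = NULL;
            mem->size = 0;
            if (fd < 0)
                    return fd;
            if (fstat(fd, &st) < 0 || st.st_size < 1)
                    goto out;
            mem->buffer = malloc(st.st_size + 1);   /* +1 for the NUL terminator */
            if (!mem->buffer)
                    goto out;
            ret = read(fd, mem->buffer, st.st_size);
            if (ret == st.st_size) {
                    mem->size = st.st_size;
                    ((char *)mem->buffer)[mem->size] = 0;   /* text parsers see one big string */
            } else {
                    free(mem->buffer);
                    mem->buffer = NULL;
                    ret = -1;
            }
    out:
            close(fd);
            return ret;
    }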

Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2012-01-27 10:56:36 -08:00
Linus Torvalds
34d682416f Fix typo ('suundo' instead of 'suunto')
I apparently was so congested that it affected my typing too when I
wrote that, and then copy-paste meant that the use and declaration
matched despite the misspelling.

Reported-by: Henrik Brautaset Aronsen <subsurface@henrik.synth.no>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2012-01-27 08:11:24 -08:00
Linus Torvalds
a65b9b48e0 Add "native" Suunto SDE zip file reading
You need to have libzip-devel installed, and pkg-config needs to know about it
for the build to pick up on it.

On at least Fedora, a simple "yum install libzip-devel" will make things
work, although you may need to force a rebuild of subsurface too (the
"file.o" file in particular - the Makefile doesn't track system
dependencies).

Then, you can just do

   subsurface my-dives.SDE

to read the data directly from the SDE file.
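
Internally the reading is plain libzip usage along these lines (the
libzip calls are real API, but the function name and how the per-dive
XML then gets handed to the dive parser are assumptions):

    #include <zip.h>

    static void try_to_open_suunto(const char *filename)
    {
            int err = 0;
            struct zip *zip = zip_open(filename, ZIP_CHECKCONS, &err);

            if (!zip)
                    return;
            for (int index = 0; ; index++) {
                    char buf[65536];        /* illustrative fixed-size buffer */
                    struct zip_file *file = zip_fopen_index(zip, index, 0);
                    zip_int64_t n;

                    if (!file)
                            break;
                    n = zip_fread(file, buf, sizeof(buf) - 1);
                    if (n > 0) {
                            buf[n] = 0;
                            /* hand the per-dive XML text to the parser here */
                    }
                    zip_fclose(file);
            }
            zip_close(zip);
    }

The pkg-config dependency mentioned above would typically surface in
the Makefile as "pkg-config --cflags libzip" and "pkg-config --libs
libzip".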

Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2012-01-26 17:43:33 -08:00
Linus Torvalds
4d10bc017a Split up file reading from 'parse-xml.c' into 'file.c'
We're going to eventually import non-xml files too, so let's begin
splitting the logic up.
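
As an illustration of the resulting split (the names are assumptions,
not necessarily the exact interface), file.c ends up owning the "get
the bytes into memory" step while parse-xml.c and any future importers
only ever see a buffer:

    #include <stddef.h>

    /* file.h: file.c does the I/O ... */
    struct memblock {
            void *buffer;
            size_t size;
    };
    int readfile(const char *filename, struct memblock *mem);

    /* ... and parse-xml.c (plus future non-xml importers) takes a buffer */
    void parse_xml_buffer(const char *buffer, int size);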

Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2012-01-26 13:00:45 -08:00