That was the whole point of the previous change.
Also, run the build number creation on a pull request as well (at least for a
while) so we don't need to create new releases in order to test that part of
the process.
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
In order to make it easier to see what's happening inside get-atomic-buildnr.sh,
write the result to a file that can be read by the caller. Not quite as
elegant, but hopefully more practical to see what's going wrong when no new
build number is created.
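A minimal sketch of the idea (the file name and variable are illustrative,
not necessarily what the script actually uses):

    # inside get-atomic-buildnr.sh: persist the result for the caller
    echo "$BUILDNR" > ./buildnr-output

    # in the calling workflow step: read it back and log it
    version=$(cat ./buildnr-output)
    echo "new build number: $version"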
Make sure that post-releasenotes is successful by actually posting a release
artifact (apparently the gh release action otherwise quietly fails).
Try to ensure we find the Android APK when uploading to the release.
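Something along these lines, assuming the gh CLI and an APK somewhere under
the build tree (the path pattern and tag variable are assumptions):

    # locate the APK wherever the Android build dropped it
    apk=$(find . -name 'Subsurface-mobile*.apk' | head -1)
    gh release upload "$TAG" "$apk" --clobber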
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
Some experimentation showed what should have been obvious: the release
information is additive. So it's enough if ONE of the actions creates release
notes; all the others can simply add additional release artifacts.
To make this more obvious, this commit creates a new action that does nothing
but create the release notes and publish the release. Since it really doesn't
do anything else, it's likely to be the quickest to complete, but that doesn't
matter - the last action that has a body or body_path in the gh-release action
determines the release notes. And we now have exactly one action that does so.
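A sketch of the resulting division of labor, using the gh CLI for
illustration:

    # exactly one action owns the release notes
    gh release create "$TAG" --title "$TAG" --notes-file release-notes.md

    # every other action only attaches artifacts and never sets a body
    gh release upload "$TAG" "$ARTIFACT" --clobber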
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
Instead of using a third-party action and painfully passing things around,
simply use the GitHub CLI (gh) and assemble the release notes on the fly.
This makes for much simpler and much easier to maintain code.
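Roughly like this, with the notes assembled inline (the exact content is an
assumption):

    # assemble the release notes on the fly and hand them straight to gh
    notes="CI build ${TAG} - $(git log -1 --format='%h (%s)')"
    gh release create "$TAG" --notes "$notes"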
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
Move both code and the release note text into files that can be shared between
multiple actions.
This should make the actions smaller and easier to read, and since the code is
shared by several actions, it should be much easier to maintain.
In order to test this without too much unnecessary noise, this commit only
changes the android workflow - the others will be changed in a later commit
once this has been tested and works (again, this can really only be tested by
merging the PR into master).
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
Move the scripts required for the setup of the Android build environment
into Docker's build context - a Dockerfile can only COPY files that are
local to that context.
This allows the removal of an extra copy step, and avoids the creation
of extra artefacts, while still providing the same functionality.
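For reference, the build then just points docker at that directory (the
paths are illustrative):

    docker build -t subsurface/android-build \
        -f packaging/android/Dockerfile packaging/android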
Signed-off-by: Michael Keller <github@ike.ch>
- for now all versions start with v6.0
- CICD builds use the monolithic build number as patch level, e.g. v6.0.12345
- local builds use the following algorithm (see the sketch after this list)
  - find the newest commit with a CICD build number that is included in the
    working tree
  - count the number of commits in the working tree since that commit
  - if there are no commits since the last CICD build, the local build version
    will be v6.0.12345-local
  - if there are N commits since the last CICD build, it will be
    v6.0.12345-N-local
- test builds in the CICD that don't create artifacts simply use a dummy release
  in order to not incorrectly increment the build number and also not to waste
  time and resources by manually checking out the nightly-build repo for each of
  these builds.
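The local-build algorithm as a hedged shell sketch (the two helpers that find
the anchor commit and map it to its CICD build number are hypothetical):

    # newest commit in our history that got a CICD build number
    latest=$(find_last_cicd_commit)            # hypothetical helper
    buildnr=$(buildnr_for_commit "$latest")    # hypothetical helper
    count=$(git rev-list --count "$latest"..HEAD)
    if [ "$count" -eq 0 ]; then
        version="v6.0.${buildnr}-local"
    else
        version="v6.0.${buildnr}-${count}-local"
    fi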
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
They are now the four-digit version, a dash, and the build number:
major.minor.patch.commitsSinceTag-buildNr
This makes it easier to correlate the release name and a specific manually
built version.
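Assembled roughly like this (variable names are illustrative):

    name="${major}.${minor}.${patch}.${commits_since_tag}-${buildnr}"
    # e.g. 6.0.3.7-12345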
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
Change the name of the `GITHUB_WORKSPACE` environment variable in the
android build script to `OUTPUT_DIR`, which is more intuitive when the
script is used for local builds.
Also test if the variable is defined before attempting to use it as the
target of the build output.
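Roughly (the artifact path is illustrative):

    # only copy the build output if the caller actually set OUTPUT_DIR
    if [ -n "${OUTPUT_DIR:-}" ]; then
        cp Subsurface-mobile.apk "$OUTPUT_DIR"/
    fi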
Signed-off-by: Michael Keller <github@ike.ch>
Prevent attempts to generate a build number for pull request builds as
they will fail due to the lack of permissions on the
`subsurface/nightly-builds` repository.
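A sketch of the guard, using the environment variable GitHub provides in
every workflow run:

    # PR tokens can't push to subsurface/nightly-builds, so bail out early
    if [ "$GITHUB_EVENT_NAME" = "pull_request" ]; then
        echo "pull request build - skipping build number generation"
        exit 0
    fi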
Signed-off-by: Michael Keller <github@ike.ch>
The necessary keys to do so aren't available (and of course we don't try
to post a release on pull requests, anyway).
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
What a pain. It turns out that github.run_number counts the number of times a
specific workflow has been run - but that count differs between workflows, so
using it won't get us a single tag with all the corresponding build artifacts.
And sadly I can't find a simple atomic way to increment a GitHub repo
variable, so I came up with this somewhat convoluted dance, using the fact
that a push to an existing branch that isn't a fast-forward will fail.
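The dance, sketched (branch and file names are assumptions): everyone tries
to push their incremented number; a non-fast-forward push fails, so exactly
one racer wins and the others retry against the new state.

    while true; do
        git fetch origin
        git reset --hard origin/main
        buildnr=$(( $(cat buildnumber) + 1 ))
        echo "$buildnr" > buildnumber
        git commit -am "build number $buildnr"
        # only succeeds if nobody else pushed in the meantime
        git push origin main && break
    done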
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
Fix the 'Snap USNs' action.
According to https://bugs.launchpad.net/lazr.restfulclient/+bug/2041407
an incompatibility was introduced by the move from Python 3.11 to
3.12, and a workaround is to pin the version to 3.11.
Signed-off-by: Michael Keller <github@ike.ch>
This way our ongoing releases will be in their own repo.
Also, use a nicer date format (at least I think this looks nicer).
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
We need some additional options when building the package, so let that script
handle the details and use the generic build script mainly for the dependencies.
Also let's not mix building for testing and building the DMG - just so I can
stay somewhat sane.
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
Pull requests can be triggered by anyone - we should not publish code
that comes in through pull requests to either GitHub releases or
Launchpad, Copr, etc.
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
Fix deprecation warnings for actions using a deprecated version of node.
Also switch to a fixed version of the environment in order to avoid
future deprecation warnings.
Signed-off-by: Michael Keller <github@ike.ch>
Even on platforms that don't have the new git version yet.
This also uses the convoluted way to create an environment variable that
points to our checked-out tree in the GitHub Action - the more obvious
approaches resulted in failed builds for obscure reasons.
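The mechanism GitHub Actions offers for this is appending to $GITHUB_ENV so
that later steps see the variable; whether that is the exact approach used
here is an assumption (the variable name is illustrative):

    echo "SUBSURFACE_SRC=$GITHUB_WORKSPACE" >> "$GITHUB_ENV"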
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
As some Linux distros start to ship both Qt5 and Qt6, it actually makes more
sense to build only against Qt6 when the user explicitly asks for it. Having it
preferred over Qt5 seems completely wrong in hindsight.
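So something like this explicit opt-in rather than auto-detection (the flag
name is an assumption about our build script):

    ./subsurface/scripts/build.sh -build-with-qt6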
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
This suddenly started: a couple of builds failed because the git submodule
checkout ran into directory ownership issues. Hopefully this will fix
it.
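The usual workaround for git's ownership check - whether this is the exact
change applied here is an assumption:

    git config --global --add safe.directory "$GITHUB_WORKSPACE"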
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
Hirsute is EOL, so we need to move to Impish.
Adding Fedora 35 allows us to do a simple test against Qt 6.
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
This GitHub Action started failing. Groovy was EOL'ed six months ago and
downloads of Groovy components from the Ubuntu servers are no longer
supported.
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
The docker container for Tumbleweed has been broken for a while now.
Given that Hirsute gives us Qt 5.15 testing, I guess it makes sense to
drop Tumbleweed for now.
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
We simply don't use release candidates in Subsurface these days, and no one
then moves these builds to stable after testing, so stable has been getting
stale while the builds that people SHOULD use have been sitting in candidate.
Of course, this will only become the default after our next release (I don't
want four-digit versions in a release build, so I can't simply add this to our
snap-stable branch).
Oh well - 5.0.3 will happen soon, given the print resolution issue for icons.
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
This release drops the qt5-default package - which really hasn't been needed
since focal. So just drop it from all of the builds after 18.04 (bionic).
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
It's debatable if it makes sense to continue building on Trusty. The AppImage
community moved on to Xenial for a reason. But for now let's just make sure the
CI builds don't all break.
Suggested-by: Simon Peter <probono@puredarwin.org>
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
Update README and ReleaseNotes.
Also remove an outdated workflow badge, add a couple of new ones, and hack around a
rendering issue where the last character of longer workflow names gets
overwritten by the status - which resulted in the arguably most important info
(which Qt version) being hidden.
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
Android and iOS use qmake, so add the code to the .pro file.
This also removes all remnants of QCharts includes and uses and all the
references to QCharts in our various build systems.
That was a brief but extremely useful detour.
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
This workflow will download the current snaps published in the `candidate`
channel for all architectures and check them for packages with published
Ubuntu Security Notices. If it finds one, it will trigger a build of the
snap recipe:
https://code.launchpad.net/~subsurface/+snap/subsurface-stable
This will rebuild the snap with patched packages and publish it to the
`candidate` channel.
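A hedged sketch of the per-architecture check (the review-tools invocation
is an assumption):

    snap download subsurface --channel=candidate
    review-tools.check-notices subsurface_*.snap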
Signed-off-by: Michał Sawicz <michal@sawicz.net>
Trying to keep the different build environments consistent, I messed up and
dropped wget and curl from the Coverity build. Moving them to the beginning of
the list so they stand out more.
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
Update to Qt 5.12.10, latest OpenSSL, add QtCharts, add other missing packages.
Also switch to gcc-7 as our statistics code requires better C++17 support than
what gcc-6 can offer.
This then creates trusty-qt512:1.1
Signed-off-by: Subsurface CI <dirk@hohndel.org>
This is kind of a random choice - I don't see much value to build this
everywhere, but it's kinda neat to use this to test that the -all option works
correctly and does the right thing with WebKit now. And it will also ensure
that the downloader build isn't broken.
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
The way I test things locally I build in the directory above the subsurface
directory. Let's match this on GitHub as well.
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
19.10 is no longer receiving updates and is causing problems when running
the tests. 20.04 also uses Qt 5.12.
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
This of course needs to be fixed in the build container itself, but
for now this might be enough to make progress.
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
In order to apply the patches for Kirigami, git insists on having
a valid user name and email.
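I.e. the usual boilerplate before applying patches (the name and email
values are illustrative):

    git config --global user.name "Subsurface CI"
    git config --global user.email "ci@example.com"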
Also, don't build the mobile app when preparing the AppImage. That
build already takes way too long and we test this in a few other
actions.
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
It's frustrating that I can't get the translation.qrc support to create the
translation files in the build directory. Having them as part of the sources
just feels wrong.
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
It is completely incomprehensible why these fail. And why randomly restarting
sometimes fixes them, and often doesn't. At this point there is no incremental
value in having this test. If it were to ever catch a real bug, we wouldn't
realize it because we are too well trained to ignore the problem.
Very disappointing, but IMHO the right thing to do.
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
It appears that Xcode 12 applies some rather self-defeating logic when picking
build architectures in release builds for the simulator. It adds aarch64 by
default and I can't find a way to turn that off from the command line. At the
same time, you can't link against the simulator if you have built with aarch64,
as the aarch64 simulator doesn't exist yet.
Since I couldn't get any of the claimed workarounds to work, I'm forcing Xcode
11 to be used in the Action.
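Roughly like this in the workflow (the exact Xcode path on the runner is an
assumption):

    sudo xcode-select -s /Applications/Xcode_11.7.app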
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
We previously tried to build the MXE Docker container on GitHub using
an Action, but that really didn't work well and was a lot more trouble
than it was worth.
So this goes back to an offline build mechanism where I simply create
an updated Docker image when needed and push that to Docker Hub.
But this nearly hides the most interesting change here - we are finally
switching to using 64bit binaries on Windows. It's 2020 and fewer than
1% of our users use 32bit Windows machines. We'll need to expand this
to be able to have both a 32bit and a 64bit version of Subsurface for
Windows. But for now, this solves the problem for 99% of our users.
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>