Binary Distribution
Jenkins
- Login info: 10.10.11.176:8080/ username: eman2
- Jobs
- multi: Triggers job cryoem-eman2 on agents
- cryoem-eman2: Test(?) and binary builds
- eman-dev(?): Triggers new build of eman-dev
- http://10.10.11.176:8080/job/eman-feedstock-building-eman-v2.99/
- http://10.10.11.176:8080/job/eman-feedstock-trigger-from-eman-master/
- Settings, Plugins ?
- Binary builds require conda-build, constructor
Packaging is done with constructor, a tool for making installers from conda packages.
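Below is a minimal construct.yaml sketch for such an installer; the name, version, channels, and spec are illustrative placeholders, not the actual EMAN2 configuration.
{{{#!highlight yaml
# Illustrative construct.yaml sketch; the real configuration lives in the packaging repos.
name: eman2                 # hypothetical installer name
version: "2.99"             # hypothetical version
channels:
  - https://conda.anaconda.org/cryoem
  - https://conda.anaconda.org/conda-forge
  - https://repo.anaconda.com/pkgs/main
specs:
  - eman2                   # constructor resolves this spec and bundles it with all of its dependencies
installer_type: sh          # shell installer; other constructor options include pkg and exe
}}}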
Jenkins Setup
Jenkins run command(?)
The Jenkins server runs on Linux; agents run on Linux, Mac, and Windows.
Jenkins Setup on Linux
Credentials
PATH
Jenkins Setup
- Jenkins master needs PATH prepended with $CONDA_PREFIX/bin (see the snippet after this list)
- docker-compose.yml lives in the home directory on the build machines (see the sketch under Linux below)
- Agent nodes setup
- Options: a server and agent per machine vs. a single server with one agent per OS
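A sketch of the PATH adjustment from the first item above; the exact conda prefix is machine-specific:
{{{#!highlight bash
# Prepend the active conda environment's bin directory to the PATH seen by the Jenkins master process.
# $CONDA_PREFIX points at the miniconda environment on the build machine (machine-specific).
export PATH="$CONDA_PREFIX/bin:$PATH"
}}}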
Linux
{{{#!highlight bash
# Basic Jenkins master container from the stock LTS image.
docker run -d -u root -p 8080:8080 -p 50000:50000 --restart unless-stopped \
    -v /home/eman2/jenkins_home:/var/jenkins_home \
    -v /var/run/docker.sock:/var/run/docker.sock \
    jenkins/jenkins:lts

# Named container from the cryoem/jenkins:dev image, forcing plugin upgrades on start.
docker run -d -u root --name jenkins-master -p 8080:8080 -p 50000:50000 --restart unless-stopped \
    -v /home/eman2/jenkins_home:/var/jenkins_home \
    -v /var/run/docker.sock:/var/run/docker.sock \
    -e PLUGINS_FORCE_UPGRADE=true -e TRY_UPGRADE_IF_NO_MARKER=true \
    cryoem/jenkins:dev
}}}
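A docker-compose.yml sketch equivalent to the first command above (illustrative; paths follow the Linux build machine layout described earlier):
{{{#!highlight yaml
# Sketch of the docker-compose.yml kept in the build machine's home directory.
# Values mirror the docker run command above; adjust paths per machine.
version: "3"
services:
  jenkins-master:
    image: jenkins/jenkins:lts
    user: root
    restart: unless-stopped
    ports:
      - "8080:8080"
      - "50000:50000"
    volumes:
      - /home/eman2/jenkins_home:/var/jenkins_home
      - /var/run/docker.sock:/var/run/docker.sock
}}}
With this file in place, docker-compose up -d starts the same container.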
Mac
Agent clock sync (see https://blog.shameerc.com/2017/03/quick-tip-fixing-time-drift-issue-on-docker-for-mac):
{{{#!highlight bash
docker run --rm --privileged alpine hwclock -s
}}}
Windows
OPENGL: https://github.com/conda/conda-recipes/blob/master/qt5/notes.md
Continuous Integration
GitHub webhooks are set up to send notifications to blake, which forwards them to the three build machines, although only the Linux machine is strictly needed: it runs the server that drives the Jenkins jobs.
- Binary builds on local build machines.
Manually triggered by including "[ci build]" anywhere in the last commit message (see the example after this list). Manually triggered builds on the master branch are uploaded as continuous builds (https://cryoem.bcm.edu/downloads/view_eman2_version/25), and builds triggered from any other branch are uploaded to the testing area (https://cryoem.bcm.edu/downloads/view_eman2_version/26).
Daily cron-triggered builds are no longer used.
- Any branch whose name starts with "release-" triggers continuous builds without having to include "[ci build]" in the commit message. Once the release branch is ready, release binaries are manually copied from the continuous-builds folder into the release folder on the server.
- CI configuration files:
JenkinsCI: Jenkinsfile (https://github.com/cryoem/eman2/blob/master/Jenkinsfile)
- Secrets like ssh keys are stored locally in Jenkins
- Some env vars need to be set by agents:
- HOME_DIR, DEPLOY_PATH, PATH+EXTRA (to add miniconda to PATH).
- PATH+EXTRA was initially not set on Windows; now it is set on Windows, too.
{{{
Launch method: via SSH
Advanced:
  Prefix Start Agent Command: "D: && "
}}}
- On Windows, "Git for Windows" might need to be installed for sh calls in Jenkins to work.
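A hypothetical way to trigger a binary build without changing any files, using the "[ci build]" marker described above:
{{{#!highlight bash
# The marker in the commit message is what triggers the build; an empty commit is enough.
git commit --allow-empty -m "Trigger binary build [ci build]"
git push
}}}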
Anaconda
Dependencies not available on anaconda or conda-forge are available on the cryoem channel (https://anaconda.org/cryoem/). The binaries are built and uploaded using conda-forge's conda-smithy (https://github.com/conda-forge/conda-smithy), which takes care of generating feedstocks, registering them on GitHub and with online CI services, and building conda recipes.
EMAN2 is built with conda-build using binaries from https://anaconda.org and, as of v2.2, is packaged into an installer with constructor (https://github.com/cryoem/constructor.git).
conda is the package manager.
https://anaconda.org is the online repository of binaries.
conda-build is the tool to build from source.
constructor is the tool to package eman2 and dependency binaries into a single installer file.
EMAN2 is distributed as a single installer which includes all its dependencies.
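A sketch of the build-then-package sequence with these tools; the recipe and constructor directory paths below are illustrative:
{{{#!highlight bash
# Build the conda package from a recipe, pulling dependencies from anaconda.org channels.
conda build recipes/eman2 -c cryoem -c defaults -c conda-forge

# Bundle the resulting package and its dependencies into a single installer.
# constructor reads construct.yaml in the given directory and writes the installer file.
constructor constructor/
}}}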
Conda
Packages that are available on https://anaconda.org can be installed into any conda environment with the command conda install <package>. Conda installs the package along with its dependencies. In order for packages to benefit from this automation, they need to be packaged in a specific way, which is done with conda-build. conda-build builds packages according to instructions provided in a recipe. A recipe consists of a file with package metadata, meta.yaml, and any other necessary resources such as build scripts (build.sh, bld.bat), patches, and so on.
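A minimal illustrative recipe skeleton (the package name, version, and URL are placeholders, not a real cryoem recipe):
{{{#!highlight yaml
# recipe/meta.yaml (skeleton for illustration only)
package:
  name: example-dep
  version: "1.0.0"

source:
  url: https://example.org/example-dep-1.0.0.tar.gz

build:
  number: 0

requirements:
  host:
    - python
  run:
    - python

test:
  commands:
    - example-dep --help

about:
  license: BSD-3-Clause
}}}
conda-build is then pointed at the directory containing meta.yaml and the build scripts.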
Recipes, Feedstocks and anaconda.org channel: cryoem
Most of EMAN2's dependencies can be found on Anaconda's defaults and conda-forge channels. A few that do not exist there, or that need to be customized, have been built and uploaded to the cryoem channel (https://anaconda.org/cryoem/dashboard). The recipes are hosted in separate repositories on GitHub (https://github.com/cryoem/). Every recipe repository follows the feedstock approach of conda-forge (http://conda-forge.github.io/). See https://github.com/cryoem?utf8=%E2%9C%93&q=-feedstock&type=&language= for a complete list of feedstocks.
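For example, a dependency hosted on the cryoem channel could be installed with a channel order matching the feedstock configuration shown later:
{{{#!highlight bash
conda install <package> -c cryoem -c defaults -c conda-forge
}}}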
Feedstocks
- https://github.com/cryoem/eman-deps-feedstock
- https://github.com/cryoem/pydusa-feedstock
- https://github.com/cryoem/eman-feedstock
General instructions
- Existing feedstocks
- Files to edit: recipe/, recipe/conda_build_config.yaml, conda-forge.yml
- conda create -n smithy conda-smithy -c conda-forge
- conda-smithy rerender
- More info in conda-smithy/README.md, conda smithy -h, conda-forge.org/docs
- New feedstocks
- conda-smithy/README.md, conda smithy -h
Conda-smithy Workflow
Conda smithy uses tokens to authenticate with GitHub.
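The feedstock configuration files:
{{{#!highlight yaml
# conda-forge.yml
channels:
  sources: [cryoem, defaults, conda-forge]
  targets:
    - [cryoem, main]
github:
  user_or_org: cryoem
  repo_name: <package>-feedstock
provider:
  linux: circle
  osx: travis
  win: appveyor
azure:
  build_id: blank
}}}
{{{#!highlight yaml
# recipe/conda_build_config.yaml
channel_sources:
  - cryoem, defaults,conda-forge
channel_targets:
  - cryoem dev
}}}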
Conda-smithy commands:
{{{#!highlight bash
conda create -n smithy conda-smithy
conda activate smithy
conda smithy init <recipe_directory>
conda smithy register-github <feedstock_directory> --organization cryoem
conda smithy register-ci --organization cryoem --without-azure --without-drone
conda smithy rerender --no-check-uptodate
}}}
Build System Notes
CMake
libpython can be linked statically or dynamically when python is built. It is important for python extensions to be aware of the type of linking in order to avoid segfaults. This can be accomplished by querying Py_ENABLE_SHARED.
{{{#!highlight bash
python -c "import sysconfig; print(sysconfig.get_config_var('Py_ENABLE_SHARED'))"
}}}
In EMAN, this is done in cmake/FindPython.cmake (https://github.com/cryoem/eman2/blob/master/cmake/FindPython.cmake#L29-L73).
OpenGL detection when Anaconda's compilers are used is handled with a cmake toolchain file (https://github.com/cryoem/eman2/blob/master/recipe/cross-linux.cmake).
- The glext.h file needed to compile the OpenGL-related modules is already present on Linux and Mac. On Windows, it is copied manually, once, into C:\Program Files\Microsoft SDKs\Windows\v6.0A\Include\gl. On Appveyor, it is downloaded as part of the environment setup every time a test is run.
Docker
Docker images and helper scripts are at https://github.com/cryoem/build-scripts (previously https://github.com/cryoem/docker-images).
Command to run docker with GUI support, CentOS7:
{{{
xhost + local:root
docker run -it -v /tmp/.X11-unix:/tmp/.X11-unix -e DISPLAY=unix$DISPLAY cryoem/eman-nvidia-cuda8-centos7
# When done with eman
xhost - local:root
}}}
:FIXME: Runs as root on Linux; chown doesn't work, so the resulting installer has root ownership.
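One possible, untested workaround would be to run the container as the invoking user instead of root:
{{{#!highlight bash
# Run with the host user's UID/GID so files created in mounted volumes are owned
# by that user rather than root (untested with these images).
docker run -it --user "$(id -u):$(id -g)" \
    -v /tmp/.X11-unix:/tmp/.X11-unix -e DISPLAY=unix$DISPLAY \
    cryoem/eman-nvidia-cuda8-centos7
}}}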