spack | A flexible package manager that supports multiple versions, configurations, platforms, and compilers | Build Tool library
kandi X-RAY | spack Summary
Spack is a multi-platform package manager that builds and installs multiple versions and configurations of software. It works on Linux, macOS, and many supercomputers. Spack is non-destructive: installing a new version of a package does not break existing installations, so many configurations of the same package can coexist. Spack offers a simple "spec" syntax that allows users to specify versions and configuration options. Package files are written in pure Python, and specs allow package authors to write a single script for many different builds of the same package. With Spack, you can build your software all the ways you want to. See the Feature Overview for examples and highlights.
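For instance, a single spec on the command line can pin the version, compiler, variants, and a dependency constraint (the package names and versions below are only examples):

```shell
# In a spec, @ selects a version, % the compiler, +/~ toggle variants,
# and ^ constrains a dependency.
spack install hdf5@1.12.1 %gcc@10.2.0 +mpi ^mpich@3.4
```

Every part of the spec is optional; `spack install hdf5` alone lets Spack pick defaults for everything else.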
Top functions reviewed by kandi - BETA
- Generate the GitLab CI pipeline YAML.
- Execute a CI rebuild.
- Parse the given argument strings for known arguments.
- Write the build configuration file.
- Create a host configuration file.
- Parse the version number from the given path.
- Create the host configuration file.
- Emit a pipeline YAML file.
- Build a tar archive.
- Check the installation of VTK-m.
spack Key Features
spack Examples and Code Snippets
$ module purge
$ module load gcc/8.3.0
$ module load python/3.5.1
$ git clone https://github.com/spack/spack.git
$ source spack/share/spack/setup-env.sh
$ spack compiler find
==> Added 2 new compilers to /home//.spack/linux/compilers.yaml
echo 'export BIO_SOFTWARES_DB_ACTIVE="~/.bioshiny/info.yaml"' >> ~/.bashrc
echo 'export BIOSHINY_CONFIG="~/.bioshiny/shiny.config.yaml"' >> ~/.bashrc
. ~/.bashrc
# Start the standalone Shiny application
wget https://raw.githubusercontent
$ git clone https://github.com/julea-io/julea.git
$ cd julea
$ ./scripts/install-dependencies.sh
$ . scripts/environment.sh
$ meson setup --prefix="${HOME}/julea-install" -Db_sanitize=address,undefined bld
$ ninja -C bld
$ julea-config --user \
Community Discussions
Trending Discussions on spack
QUESTION
I am moving a lot of old scripts used to configure a computer room into Ansible, and it has really improved the workflow. I currently have several playbooks and need to share a common config among them. But in one task I have run into a problem: I need a hostname/IP to be a variable in the inventory. I have read a lot of tutorials and docs, and maybe I am dumb or very tired, but after many hours I have not found a solution; it seems it is not possible. Dynamic inventories, group_vars, and so on look similar but are actually different from what I need here. I have created a MWE to show the case easily. This MWE is a subset, but the main idea remains: vars inside vars/main.yml are going to be shared among various playbooks (easy) and inventories (the question here). Thanks in advance.
- ansible.cfg:
ANSWER
Answered 2022-Mar-01 at 08:26 Use the module add_host and create a new group, package_server, in the first play. Then use it in the second play. For example
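A minimal sketch of that approach (the play names, the vars file, and the package_server_ip variable are hypothetical; only add_host and the group name come from the answer):

```yaml
# site.yml -- hypothetical two-play example
- name: Register the package server host from shared vars
  hosts: localhost
  gather_facts: false
  vars_files:
    - vars/main.yml              # assumed to define package_server_ip
  tasks:
    - name: Add the host to the in-memory group package_server
      ansible.builtin.add_host:
        name: "{{ package_server_ip }}"
        groups: package_server

- name: Use the dynamically added host
  hosts: package_server
  tasks:
    - name: Check connectivity
      ansible.builtin.ping:
```

The in-memory group exists only for the duration of the ansible-playbook run, which is what makes this work without touching the static inventory.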
QUESTION
I am using a custom package manager called spack, which allows me to load installed modules using the spack load command. It is similar to the familiar module load command in many ways. I am using zsh.
I have set up a shell script with a function that I would later like to insert into my .zshrc file. It is currently located in a standalone file for testing purposes, which looks as follows:
...ANSWER
Answered 2022-Feb-21 at 11:20 I'm not familiar with spack, but likely spack is a shell function which modifies the current shell environment. That is how module works. Run type spack to check.
You can't modify the shell environment from a script; you can from a shell function.
Copy and paste the function load-standard into "$ZDOTDIR/.zshrc" (for the current user; /etc/zshrc for all users), source .zshrc (. "$ZDOTDIR/.zshrc"), and you should be fine (no need to restart).
You can also create a list of functions in a file and add . /path/to/functions to your zshrc to source it.
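The underlying distinction can be demonstrated with plain POSIX shell, no spack needed: a function runs in the current shell and can change its environment, while a script runs in a child process and cannot:

```shell
# A function executes in the current shell, so its exports persist.
load_demo() { export FROM_FUNC=yes; }
load_demo
echo "function: ${FROM_FUNC}"

# A script runs in a child process, so its exports vanish on return.
printf '#!/bin/sh\nexport FROM_SCRIPT=yes\n' > /tmp/env_demo.sh
chmod +x /tmp/env_demo.sh
/tmp/env_demo.sh
echo "script: ${FROM_SCRIPT:-unset}"
```

The first echo prints "function: yes"; the second prints "script: unset", because the child's environment died with it. This is exactly why spack and module must be functions (or be sourced) rather than ordinary scripts.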
QUESTION
I am writing a Python program that establishes an SSH connection to a server. For this I am using fabric (fabfile.org).
When I connect to the server via ssh in a terminal, my $PATH is set up. When I connect via fabric in my Python program, parts of $PATH are missing...
- Where does bash set up $PATH when I connect via a terminal?
- How do I make fabric do the same?
Thanks in advance!
edit: this is what I get when I run echo -e ${PATH//:/\\n}:
SSH via Terminal:
...ANSWER
Answered 2021-Nov-27 at 01:24 I found the solution:
I had to run source /etc/profile with fabric in order to get my correct $PATH.
Found out by reading: https://www.gnu.org/software/bash/manual/bash.html#Bash-Startup-Files
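The difference can be reproduced without fabric, assuming a typical Linux layout where /etc/profile initializes PATH: an SSH command channel (which is what fabric's run() typically uses) gets a non-login, non-interactive shell, and that kind of shell skips /etc/profile:

```shell
# Non-login shell, like an SSH exec channel: /etc/profile is NOT read.
bash -c  'echo "plain: $PATH"'

# Login shell: /etc/profile (and profile.d) are read first.
bash -lc 'echo "login: $PATH"'

# The fix from the answer, sourcing /etc/profile explicitly:
bash -c  'source /etc/profile && echo "fixed: $PATH"'
```

Comparing the first two outputs shows which PATH entries come from the login-shell startup files, which is what the linked Bash manual section documents.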
QUESTION
I understand that using the -ffast-math flag allows for unsafe math operations and disables signalling NaNs. However, I expected the functions isnan() and isinf() to still be able to return the correct results, which they do not.
Here's an example, file test_isnan.c:
ANSWER
Answered 2021-Oct-06 at 10:24 From the GCC manual:
-ffast-math: Sets the options ... -ffinite-math-only ...
-ffinite-math-only: Allow optimizations for floating-point arithmetic that assume that arguments and results are not NaNs or +-Infs.
The compiler therefore optimizes the code to:
QUESTION
My question is about hpcviewer, which is a tool to visualize trace data generated by hpcrun. I succeeded in installing hpctoolkit, but I have a problem finding hpcviewer.
To test the toolkit, I created a simple hello_world program in C (with OpenMP) and executed the following block of commands as shown in https://wiki.mpich.org/mpich/index.php/HPCToolkit_by_example:
ANSWER
Answered 2021-Jun-11 at 08:31 The hpcviewer command is not found because hpctoolkit has not been loaded.
You should execute the following command before asking for hpcviewer to visualize trace data generated by hpcrun:
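The command itself was elided above; with a spack-based install it would typically be a load step, sketched here (note that in Spack the viewer is packaged separately from hpctoolkit, and the database directory name is illustrative):

```shell
# Put hpcrun/hpcstruct/hpcprof on PATH:
spack load hpctoolkit

# The GUI trace viewer is its own package:
spack install hpcviewer
spack load hpcviewer

# Open the measurement database produced by hpcprof:
hpcviewer hpctoolkit-hello_world-database
```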
QUESTION
I am trying to install hpctoolkit using Spack. In order to do that, I executed:
ANSWER
Answered 2021-Jun-10 at 11:42 In order to fix this error, you should specify the path to g++. In my case, here is the updated content of my compilers.yaml file:
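The answer's actual file contents were elided above; a compilers.yaml entry with the C++ compiler path filled in typically has this shape (the paths, version, OS, and target below are illustrative, not the author's real values):

```yaml
compilers:
- compiler:
    spec: gcc@10.2.0
    paths:
      cc: /usr/bin/gcc
      cxx: /usr/bin/g++       # was presumably empty/null, hence the error
      f77: /usr/bin/gfortran
      fc: /usr/bin/gfortran
    operating_system: centos8
    target: x86_64
    modules: []
```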
QUESTION
I am trying to install hpctoolkit using spack. In order to do that, I executed:
ANSWER
Answered 2021-Jun-09 at 12:34 As you can see in the error, compiler 'gcc@10.2.0' does not support compiling C++ programs.
In order to display the compilers, use the command:
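The command elided above is most likely one of Spack's compiler subcommands:

```shell
# List the compilers spack knows about:
spack compiler list

# Show details for one entry (the spec is an example):
spack compiler info gcc@10.2.0

# Register compilers found on PATH into compilers.yaml:
spack compiler find
```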
QUESTION
I am trying to install hpctoolkit using spack. In order to do that, I executed :
...ANSWER
Answered 2021-Jun-09 at 12:13 Try changing lcompilers to compilers. It's just a typo.
QUESTION
I'd like to learn how to write PNG images pixel-by-pixel using both RGB and HSV color models with C++. I read that this should be fairly easy using PNGwriter (https://github.com/pngwriter/pngwriter), but I've spent many hours struggling with installing it (on Ubuntu) and compiling my code with it. Any help would be much appreciated.
Disclaimer: I have a weird background in the sense that I have many years of experience in using Unix-like operating systems, doing stuff in the terminal, and writing code, but I know little/nothing about installing software from the source code or compiling programs manually or with makefiles from multiple source code files.
The installation instructions on GitHub advise to do one of the following:
Spack:
...
ANSWER
Answered 2021-Jan-09 at 16:23 Thanks to john, I think I got it figured out. My guess is that the installation from source (after the Spack installation) messed things up somehow. I reinstalled PNGwriter using Spack and, now apparently having all the pieces for the compilation command, was finally able to compile the example code.
Summary:
Source Spack
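A hedged end-to-end sketch of that Spack route (the source file name is arbitrary, and the exact -l flags depend on how PNGwriter and its dependencies were built):

```shell
spack install pngwriter
spack load pngwriter

# PNGwriter links against libpng, zlib, and (optionally) freetype:
g++ pngtest.cpp -o pngtest -lpngwriter -lfreetype -lpng -lz
./pngtest
```

Loading the package through spack is what makes its headers and libraries visible to g++ without hand-written -I/-L flags.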
QUESTION
I want to execute a Popper workflow on a Linux HPC (high-performance computing) cluster. I don't have admin/sudo rights. I know that I should use Singularity instead of Docker because Singularity is designed not to need sudo to run.
However, singularity build needs sudo privileges if not executed in fakeroot/rootless mode.
This is what I have done on the HPC login node:
- I installed Spack (0.15.4) and Singularity (3.6.1):
ANSWER
Answered 2020-Sep-08 at 10:13 For an image from Docker Hub: How do I enable “user namespace”?
I found that the user namespace feature needs to be already enabled on the host machine. Here are instructions for checking whether it’s enabled.
In the case of the cluster computer I am using (Frankfurt Goethe HLR), user namespaces are only enabled in the computation nodes, not the login node. That’s why it didn’t work for me.
So I need to send the job with SLURM (here only the first step with a container from Docker Hub):
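A sketch of such a batch submission (the partition, time limit, and image are placeholders; the cluster's actual SLURM options will differ):

```shell
#!/bin/bash
#SBATCH --job-name=sif-build
#SBATCH --partition=general
#SBATCH --time=00:30:00
#SBATCH --nodes=1

# Runs on a compute node, where user namespaces are enabled,
# so --fakeroot works without sudo:
singularity build --fakeroot container.sif docker://ubuntu:20.04
```

Saved as build.sbatch and submitted with sbatch build.sbatch, this sidesteps the login node's disabled user namespaces entirely.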
Community Discussions, Code Snippets contain sources that include Stack Exchange Network
Vulnerabilities
No vulnerabilities reported
Install spack
Spack is not installed with pip; instead, clone the repository and source the setup script for your shell. You will need Python, a C/C++ compiler, make, patch, and git on the system; Spack bootstraps the rest of what it needs on first use.
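The canonical setup, per the Spack README, is a clone plus sourcing the environment script (the trailing zlib install is just a smoke test):

```shell
git clone -c feature.manyFiles=true https://github.com/spack/spack.git
cd spack

# Pick the script matching your shell:
. share/spack/setup-env.sh          # bash/zsh/sh
# source share/spack/setup-env.csh  # csh/tcsh

spack install zlib                  # smoke test: build a small package
```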