h5py | h5py package is a Pythonic interface | Dataset library

by h5py | Python Version: 3.6.0 | License: BSD-3-Clause

kandi X-RAY | h5py Summary

h5py is a Python library typically used in Artificial Intelligence, Dataset, and Numpy applications. h5py has no bugs and no vulnerabilities, a build file is available, it has a Permissive License, and it has high support. You can download it from GitHub.
HDF5 for Python -- The h5py package is a Pythonic interface to the HDF5 binary data format.
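For orientation, here is a minimal usage sketch (the file name is hypothetical): write a NumPy array to an HDF5 file with an attribute, then read both back.

import h5py
import numpy as np

data = np.arange(12).reshape(3, 4)
with h5py.File("data.h5", "w") as f:
    f.create_dataset("numbers", data=data)          # datasets hold array data
    f["numbers"].attrs["description"] = "demo array"  # attributes hold metadata

with h5py.File("data.h5", "r") as f:
    print(f["numbers"][:])                          # full read returns a NumPy array
    print(f["numbers"].attrs["description"])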

Support

  • h5py has a highly active ecosystem.
  • It has 1652 stars, 433 forks, and 51 watchers.
  • There were 3 major releases in the last 12 months.
  • There are 208 open issues and 1056 closed issues; on average, issues are closed in 60 days. There are 21 open pull requests and 0 closed pull requests.
  • It has a positive sentiment in the developer community.
  • The latest version of h5py is 3.6.0.

Quality

  • h5py has 0 bugs and 0 code smells.

Security

  • h5py has no vulnerabilities reported, and its dependent libraries have no vulnerabilities reported.
  • h5py code analysis shows 0 unresolved vulnerabilities.
  • There are 0 security hotspots that need review.

License

  • h5py is licensed under the BSD-3-Clause License. This license is Permissive.
  • Permissive licenses have the least restrictions, and you can use them in most projects.

Reuse

  • h5py releases are available to install and integrate.
  • Build file is available; you can build the component from source (see the sketch after this list).
  • h5py saves you 4763 person hours of effort in developing the same functionality from scratch.
  • It has 10503 lines of code, 1191 functions and 88 files.
  • It has medium code complexity. Code complexity directly impacts maintainability of the code.
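
As referenced above, a rough sketch of building from source (this assumes a compiler and an HDF5 installation are already present; the repository URL is the official one):

git clone https://github.com/h5py/h5py.git
cd h5py
pip install .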
Top functions reviewed by kandi - BETA

kandi has reviewed h5py and discovered the functions below as its top functions. This is intended to give you an instant insight into the functionality h5py implements, and to help you decide whether it suits your requirements; a short usage sketch follows the list.

  • Fill a dcpl (dataset creation property list) for a given shape.
  • Create a new ndarray.
  • Retrieve an item from a structured data source.
  • Create a new array with the specified shape and data.
  • Return the shape of the dataspace.
  • Create a dataset.
  • Apply a selection to a particular shape.
  • Find the HDF5 compiler settings.
  • Run Cython.
  • Create a file ID for a file.
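
These are internal helpers; as a rough sketch of the public API they support (file name and parameters are illustrative), dataset creation and selections look like this:

import h5py
import numpy as np

with h5py.File("example.h5", "w") as f:
    # chunks/compression exercise the dataset-creation property list
    dset = f.create_dataset("grid", shape=(100, 100), dtype="f8",
                            chunks=(10, 10), compression="gzip")
    dset[0:10, 0:10] = np.random.rand(10, 10)   # write a selection

with h5py.File("example.h5", "r") as f:
    block = f["grid"][0:10, 0:10]               # read back the same selection
    print(block.shape, f["grid"].shape)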

h5py Key Features

HDF5 for Python -- The h5py package is a Pythonic interface to the HDF5 binary data format.

Build an adjacency matrix from distances between cosmological objects

import numpy as np
from sklearn.neighbors import radius_neighbors_graph

# Your example data in runnable format
dx = np.array([2.63370612e-01, 3.48350511e-01, -1.23379511e-02, 
               6.63767411e+00, 1.32910697e+01,  8.75469902e+00])
dy = np.array([0.33889825,  0.21808108,   0.50170807, 
               8.95542985, -9.84251952, -16.38661054])
dz = np.array([-0.26469788,  -0.10382767, -0.16625317, 
               -4.84708218, -13.77888398, 10.42730599])

# Build a coordinate matrix with columns x, y, z, with one star per row
X = np.column_stack([dx, dy, dz])

print(X)
[[ 2.63370612e-01  3.38898250e-01 -2.64697880e-01]
 [ 3.48350511e-01  2.18081080e-01 -1.03827670e-01]
 [-1.23379511e-02  5.01708070e-01 -1.66253170e-01]
 [ 6.63767411e+00  8.95542985e+00 -4.84708218e+00]
 [ 1.32910697e+01 -9.84251952e+00 -1.37788840e+01]
 [ 8.75469902e+00 -1.63866105e+01  1.04273060e+01]]

# Find the neighbours of each star, restricted to distance lower than radius=1.2
C = radius_neighbors_graph(X, 1.2)

# C is the connectivity matrix in Compressed Sparse Row (CSR) format. 
# For demonstration purposes, convert CSR matrix to dense representation 
# as a numpy matrix
C.todense()

matrix([[0., 1., 1., 0., 0., 0.],
        [1., 0., 1., 0., 0., 0.],
        [1., 1., 0., 0., 0., 0.],
        [0., 0., 0., 0., 0., 0.],
        [0., 0., 0., 0., 0., 0.],
        [0., 0., 0., 0., 0., 0.]])

ERROR: Failed building wheel for h5py / Failed to build h5py / ERROR: Could not build wheels for h5py, which is required to install pyproject.toml-based projects

$ brew install hdf5              # on Apple Silicon, if needed: arch -arm64 brew install hdf5
$ export HDF5_DIR=/opt/homebrew/Cellar/hdf5/1.12.0_4
# or, if hdf5 is installed under /opt/homebrew/opt/hdf5 (check which location exists first):
$ export HDF5_DIR=/opt/homebrew/opt/hdf5
$ pip install --no-binary=h5py h5py
# if the project uses poetry:
$ poetry install
$ poetry lock
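
To confirm the build worked, one can print h5py's build information (this assumes the steps above completed without error):

$ python -c "import h5py; print(h5py.version.info)"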

Error while installing TensorFlow with pip

# Remove the root prefix
rm -rf $(conda info --base)

# Configuration file, common between conda installations
rm ~/.condarc
# Environment locations and system information
rm -rf ~/.conda
-----------------------
xcode-select --install
chmod +x Miniforge3-MacOSX-arm64.sh
./Miniforge3-MacOSX-arm64.sh
conda config --set auto_activate_base false
conda create --name conda_tf python=3.8
conda activate conda_tf
conda install -c apple tensorflow-deps
pip install tensorflow-macos
conda deactivate
conda activate conda_tf
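
After reactivating the environment, one can sanity-check the install (assuming the steps above succeeded):

python -c "import tensorflow as tf; print(tf.__version__)"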

Colab: (0) UNIMPLEMENTED: DNN library is not found

!pip install tensorflow==2.7.0
-----------------------
'tensorflow==2.7.0',
'tf-models-official==2.7.0',
'tensorflow_io==0.23.1',

"ModuleNotFoundError: No module named 'keras.engine.base_layer_v1'" when running a PyInstaller .exe file

ModuleNotFoundError: No module named 'tensorflow.python.keras.engine.base_layer_v1'

# add the missing module as a hidden import, either in the .spec file:
hiddenimports=['tensorflow.python.keras.engine.base_layer_v1'],
# or on the PyInstaller command line:
--hidden-import=tensorflow.python.keras.engine.base_layer_v1

Cannot find conda info. Please verify your conda installation on EMR

wget https://repo.anaconda.com/miniconda/Miniconda3-py37_4.9.2-Linux-x86_64.sh -O /home/hadoop/miniconda.sh \
    && /bin/bash ~/miniconda.sh -b -p $HOME/conda

echo -e '\n export PATH=$HOME/conda/bin:$PATH' >> $HOME/.bashrc && source $HOME/.bashrc

conda config --set always_yes yes --set changeps1 no
conda config -f --add channels conda-forge

conda create -n zoo python=3.7 # "zoo" is the conda environment name
conda init bash
source activate zoo
conda install -c conda-forge python=3.7.0 orca
sudo /home/hadoop/conda/envs/zoo/bin/python3.7 -m pip install virtualenv

"spark.pyspark.python": "/home/hadoop/conda/envs/zoo/bin/python3",
"spark.pyspark.virtualenv.enabled": "true",
"spark.pyspark.virtualenv.type": "native",
"spark.pyspark.virtualenv.bin.path": "/home/hadoop/conda/envs/zoo/bin/",
"zeppelin.pyspark.python": "/home/hadoop/conda/bin/python",
"zeppelin.python": "/home/hadoop/conda/bin/python"

conda install and conda build result in different dependency versions

$ mamba search --info conda-forge/linux-64::hdf5[version='1.10.6',build='nompi_h6a2412b_1114']

hdf5 1.10.6 nompi_h6a2412b_1114
-------------------------------
file name   : hdf5-1.10.6-nompi_h6a2412b_1114.tar.bz2
name        : hdf5
version     : 1.10.6
build       : nompi_h6a2412b_1114
build number: 1114
size        : 3.1 MB
license     : LicenseRef-HDF5
subdir      : linux-64
url         : https://conda.anaconda.org/conda-forge/linux-64/hdf5-1.10.6-nompi_h6a2412b_1114.tar.bz2
md5         : 0a2984b78f51148d7ff6219abe73509e
timestamp   : 2021-01-08 23:10:11 UTC
dependencies: 
  - libcurl >=7.71.1,<8.0a0
  - libgcc-ng >=9.3.0
  - libgfortran-ng
  - libgfortran5 >=9.3.0
  - libstdcxx-ng >=9.3.0
  - openssl >=1.1.1i,<1.1.2a
  - zlib >=1.2.11,<1.3.0a0
$ mamba search --info conda-forge/linux-64::hdf5[version='1.10.6',build='nompi_h3c11f04_101']

hdf5 1.10.6 nompi_h3c11f04_101
------------------------------
file name   : hdf5-1.10.6-nompi_h3c11f04_101.tar.bz2
name        : hdf5
version     : 1.10.6
build       : nompi_h3c11f04_101
build number: 101
size        : 3.0 MB
license     : HDF5
subdir      : linux-64
url         : https://conda.anaconda.org/conda-forge/linux-64/hdf5-1.10.6-nompi_h3c11f04_101.tar.bz2
md5         : 9f1ccc4d36edf8ea15ce19f52cf6d601
timestamp   : 2020-07-31 12:26:29 UTC
dependencies: 
  - libgcc-ng >=7.5.0
  - libgfortran-ng >=7,<8.0a0
  - libstdcxx-ng >=7.5.0
  - zlib >=1.2.11,<1.3.0a0
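
Since the two builds differ (note the libcurl/openssl dependencies added in build 1114), one way to force a specific variant is to pin the full build string; a hedged example using the builds shown above:

conda install 'hdf5=1.10.6=nompi_h3c11f04_101'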

How to solve no such node error in pytables and h5py

import h5py
import numpy as np
from PIL import Image

sample = 'sample_5'
with h5py.File(dset_filepath, 'r', libver='latest', swmr=True) as h5file:
    if sample not in h5file['/train_group_0'].keys():
        print(f'Dataset Read Error: {sample} not found')
        return None, None
    else:
        node = h5file[f'/train_group_0/{sample}'] # <- this line breaks
        img = Image.fromarray(np.uint8(node))
        if 'TITLE' not in node.attrs.keys():
            print(f'Attribute Read Error: TITLE not found')
            return img, None
        else:
            target = node.attrs.get('TITLE').decode('utf-8')
            return img, int(target.strip())
import tables as tb
sample = 'sample_5'
with tb.open_file(dset_filepath, 'r') as h5file:  # pytables has no libver/swmr options
    if sample not in h5file.get_node('/train_group_0'):
        print(f'Dataset Read Error: {sample} not found')
        return None, None
    else:
        node = h5file.get_node(f'/train_group_0/{sample}') # <- this line breaks
        img = Image.fromarray(np.uint8(node))
        if 'TITLE' not in node._v_attrs:
            print(f'Attribute Read Error: TITLE not found')
            return img, None
        else:
            target = node._v_attrs['TITLE'].decode('utf-8')
            return img, int(target.strip())

Why do I have to call MPI.Finalize() inside the destructor?

import h5py
import mpi4py
mpi4py.rc.initialize = False  # defer MPI_Init so it can be called explicitly below
mpi4py.rc.finalize = False    # defer MPI_Finalize as well
from mpi4py import MPI

class DataGenerator:
    def __init__(self, filename, N, comm):
        self.comm = comm
        # the "mpio" driver requires h5py built with MPI (parallel HDF5) support
        self.file = h5py.File(filename, 'w', driver="mpio", comm=comm)

        # Create datasets
        self.data_ds= self.file.create_dataset("indices", (N,1), dtype='i')

    def __enter__(self):
        return self

    def __exit__(self, type, value, traceback):
        self.file.close()


if __name__=='__main__':
    MPI.Init()
    world = MPI.COMM_WORLD
    world_rank = MPI.COMM_WORLD.rank

    filename = "test.hdf5"
    N = 10
    with DataGenerator(filename, N, comm=world) as data_gen:
        pass
    MPI.Finalize()
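
A sketch of launching this under MPI (the script name is hypothetical; this assumes an MPI-enabled h5py build):

mpiexec -n 4 python datagen.py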

UnsatisfiableError on importing environment pywin32==300 (Requested package -> Available versions)

name: restoredEnv
channels:
  - anaconda
  - conda-forge
  - defaults
dependencies:
  - _anaconda_depends=2020.07
  - _ipyw_jlab_nb_ext_conf=0.1.0
  - _tflow_select=2.3.0=eigen
  - aiohttp=3.7.4
  - alabaster=0.7.12
  - anaconda=custom
  - anaconda-client=1.7.2
  - anaconda-navigator=2.0.3
  - anaconda-project=0.9.1
  - anyio=2.2.0
  - appdirs=1.4.4
  - argh=0.26.2
  - argon2-cffi=20.1.0
  - arrow=1.0.3
  - asn1crypto=1.4.0
  - astor=0.8.1
  - astroid=2.5.2
  - astropy=4.2.1
  - astunparse=1.6.3
  - async-timeout=3.0.1
  - async_generator=1.10
  - atomicwrites=1.4.0
  - attrs=20.3.0
  - autopep8=1.5.6
  - babel=2.9.0
  - backcall=0.2.0
  - backports=1.0
  - backports.functools_lru_cache=1.6.3
  - backports.shutil_get_terminal_size=1.0.0
  - backports.tempfile=1.0
  - backports.weakref=1.0.post1
  - bcrypt=3.2.0
  - beautifulsoup4=4.9.3
  - binaryornot=0.4.4
  - bitarray=1.9.1
  - bkcharts=0.2
  - black=20.8b1
  - blas=1.0=mkl
  - bleach=3.3.0
  - blinker=1.4
  - blosc=1.21.0
  - bokeh=2.3.0
  - boto=2.49.0
  - bottleneck=1.3.2
  - brotli=1.0.9
  - brotlipy=0.7.0
  - bzip2=1.0.8
  - ca-certificates=2020.10.14=0
  - cached-property=1.5.2
  - cached_property=1.5.2
  - certifi=2020.6.20
  - cffi=1.14.5
  - chardet=4.0.0
  - charls=2.2.0
  - click=7.1.2
  - cloudpickle=1.6.0
  - clyent=1.2.2
  - colorama=0.4.4
  - comtypes=1.1.9
  - conda=4.10.1
  - conda-build=3.18.11
  - conda-content-trust=0.1.1
  - conda-env=2.6.0=1
  - conda-package-handling=1.7.2
  - conda-repo-cli=1.0.4
  - conda-token=0.3.0
  - conda-verify=3.4.2
  - console_shortcut=0.1.1=4
  - contextlib2=0.6.0.post1
  - cookiecutter=1.7.2
  - cryptography=3.4.7
  - curl=7.76.0
  - cycler=0.10.0
  - cython=0.29.22
  - cytoolz=0.11.0
  - dask=2021.4.0
  - dask-core=2021.4.0
  - dataclasses=0.8
  - decorator=4.4.2
  - defusedxml=0.7.1
  - diff-match-patch=20200713
  - distributed=2021.4.0
  - docutils=0.17
  - entrypoints=0.3
  - et_xmlfile=1.0.1
  - fastcache=1.1.0
  - filelock=3.0.12
  - flake8=3.9.0
  - flask=1.1.2
  - freetype=2.10.4
  - fsspec=0.9.0
  - future=0.18.2
  - get_terminal_size=1.0.0
  - gevent=21.1.2
  - giflib=5.2.1
  - glew=2.1.0
  - glob2=0.7
  - gmpy2=2.1.0b1
  - google-pasta=0.2.0
  - greenlet=1.0.0
  - h5py=2.10.0
  - hdf5=1.10.6
  - heapdict=1.0.1
  - html5lib=1.1
  - icc_rt=2019.0.0
  - icu=68.1
  - idna=2.10
  - imagecodecs=2021.3.31
  - imageio=2.9.0
  - imagesize=1.2.0
  - importlib-metadata=3.10.0
  - importlib_metadata=3.10.0
  - inflection=0.5.1
  - iniconfig=1.1.1
  - intel-openmp=2021.2.0
  - intervaltree=3.0.2
  - ipykernel=5.5.3
  - ipython=7.22.0
  - ipython_genutils=0.2.0
  - ipywidgets=7.6.3
  - isort=5.8.0
  - itsdangerous=1.1.0
  - jdcal=1.4.1
  - jedi=0.17.2
  - jinja2=2.11.3
  - jinja2-time=0.2.0
  - joblib=1.0.1
  - jpeg=9d
  - json5=0.9.5
  - jsonschema=3.2.0
  - jupyter=1.0.0
  - jupyter-packaging=0.7.12
  - jupyter_client=6.1.12
  - jupyter_console=6.4.0
  - jupyter_core=4.7.1
  - jupyter_server=1.5.1
  - jupyterlab=3.0.12
  - jupyterlab_pygments=0.1.2
  - jupyterlab_server=2.4.0
  - jupyterlab_widgets=1.0.0
  - jxrlib=1.1
  - keras-applications=1.0.8
  - keras-preprocessing=1.1.2
  - keyring=23.0.1
  - kivy=2.0.0
  - kiwisolver=1.3.1
  - krb5=1.17.2
  - lazy-object-proxy=1.6.0
  - lcms2=2.12
  - lerc=2.2.1
  - libaec=1.0.4
  - libarchive=3.5.1
  - libblas=3.9.0=8_mkl
  - libcblas=3.9.0=8_mkl
  - libclang=11.1.0
  - libcurl=7.76.0
  - libdeflate=1.7
  - libiconv=1.16
  - liblapack=3.9.0=8_mkl
  - liblief=0.10.1
  - libllvm9=9.0.1
  - libpng=1.6.37
  - libprotobuf=3.16.0
  - libsodium=1.0.18
  - libspatialindex=1.9.3
  - libssh2=1.9.0
  - libtiff=4.2.0
  - libuv=1.39.0
  - libwebp-base=1.2.0
  - libxml2=2.9.10
  - libxslt=1.1.33
  - libzopfli=1.0.3
  - llvmlite=0.36.0
  - locket=0.2.0
  - lxml=4.6.3
  - lz4-c=1.9.3
  - lzo=2.10
  - m2w64-gcc-libgfortran=5.3.0=6
  - m2w64-gcc-libs=5.3.0=7
  - m2w64-gcc-libs-core=5.3.0=7
  - m2w64-gmp=6.1.0=2
  - m2w64-libwinpthread-git=5.0.0.4634.697f757=2
  - markdown=3.3.4
  - markupsafe=1.1.1
  - matplotlib=3.4.1
  - matplotlib-base=3.4.1
  - mccabe=0.6.1
  - menuinst=1.4.16
  - mistune=0.8.4
  - mkl=2020.4
  - mkl-service=2.3.0
  - mkl_fft=1.3.0
  - mkl_random=1.2.0
  - mock=4.0.3
  - more-itertools=8.7.0
  - mpc=1.1.0
  - mpfr=4.0.2
  - mpir=3.0.0
  - mpmath=1.2.1
  - msgpack-python=1.0.2
  - msys2-conda-epoch=20160418=1
  - multidict=5.1.0
  - multipledispatch=0.6.0
  - mypy_extensions=0.4.3
  - navigator-updater=0.2.1
  - nbclassic=0.2.6
  - nbclient=0.5.3
  - nbconvert=6.0.7
  - nbformat=5.1.3
  - nest-asyncio=1.5.1
  - networkx=2.5.1
  - nltk=3.6
  - nose=1.3.7
  - notebook=6.3.0
  - numba=0.53.1
  - numexpr=2.7.3
  - numpy=1.20.2
  - numpy-base=1.18.5
  - numpydoc=1.1.0
  - olefile=0.46
  - openjpeg=2.4.0
  - openpyxl=3.0.7
  - openssl=1.1.1k
  - opt_einsum=3.3.0
  - packaging=20.9
  - pandas=1.2.3
  - pandoc=2.13
  - pandocfilters=1.4.2
  - paramiko=2.7.2
  - parso=0.7.0
  - partd=1.1.0
  - path=15.1.2
  - path.py=12.5.0=0
  - pathlib2=2.3.5
  - pathspec=0.8.1
  - pathtools=0.1.2
  - patsy=0.5.1
  - pep8=1.7.1
  - pexpect=4.8.0
  - pickleshare=0.7.5
  - pillow=8.1.2
  - pip=21.0.1
  - pkginfo=1.7.0
  - pluggy=0.13.1
  - ply=3.11
  - pooch=1.3.0
  - powershell_shortcut=0.0.1=3
  - poyo=0.5.0
  - prometheus_client=0.10.0
  - prompt-toolkit=3.0.18
  - prompt_toolkit=3.0.18
  - psutil=5.8.0
  - ptyprocess=0.7.0
  - py=1.10.0
  - py-lief=0.10.1
  - pyasn1=0.4.8
  - pycodestyle=2.6.0
  - pycosat=0.6.3
  - pycparser=2.20
  - pycurl=7.43.0.6
  - pydocstyle=6.0.0
  - pyerfa=1.7.2
  - pyfirmata=1.1.0
  - pyflakes=2.2.0
  - pygments=2.8.1
  - pyjwt=2.1.0
  - pylint=2.7.2
  - pyls-black=0.4.6
  - pyls-spyder=0.3.2
  - pynacl=1.4.0
  - pyodbc=4.0.30
  - pyopenssl=20.0.1
  - pyparsing=2.4.7
  - pyqt=5.12.3
  - pyqt-impl=5.12.3
  - pyqt5-sip=4.19.18
  - pyqtchart=5.12
  - pyqtwebengine=5.12.1
  - pyreadline=2.1
  - pyrsistent=0.17.3
  - pyserial=3.4
  - pysocks=1.7.1
  - pytables=3.6.1
  - pytest=6.2.3
  - python=3.8.3
  - python-dateutil=2.8.1
  - python-jsonrpc-server=0.4.0
  - python-language-server=0.36.2
  - python-libarchive-c=2.9
  - python-slugify=4.0.1
  - python_abi=3.8=1_cp38
  - pytz=2021.1
  - pywavelets=1.1.1
  - pywin32=300
  - pywin32-ctypes=0.2.0
  - pywinpty=0.5.7
  - pyyaml=5.4.1
  - pyzmq=22.0.3
  - qdarkstyle=3.0.2
  - qstylizer=0.1.10
  - qt=5.12.9
  - qtawesome=1.0.2
  - qtconsole=5.0.3
  - qtpy=1.9.0
  - regex=2021.4.4
  - requests=2.25.1
  - requests-oauthlib=1.3.0
  - rope=0.18.0
  - rsa=4.7.2
  - rtree=0.9.7
  - ruamel_yaml=0.15.80
  - scikit-image=0.18.1
  - scipy=1.6.2
  - sdl2=2.0.12
  - sdl2_image=2.0.5
  - sdl2_mixer=2.0.4
  - sdl2_ttf=2.0.15
  - seaborn=0.11.1
  - seaborn-base=0.11.1
  - send2trash=1.5.0
  - setuptools=49.6.0
  - simplegeneric=0.8.1
  - singledispatch=3.6.1
  - sip=4.19.25
  - six=1.15.0
  - smpeg2=2.0.0
  - snappy=1.1.8
  - sniffio=1.2.0
  - snowballstemmer=2.1.0
  - sortedcollections=2.1.0
  - sortedcontainers=2.3.0
  - soupsieve=2.0.1
  - sphinx=3.5.3
  - sphinxcontrib=1.0
  - sphinxcontrib-applehelp=1.0.2
  - sphinxcontrib-devhelp=1.0.2
  - sphinxcontrib-htmlhelp=1.0.3
  - sphinxcontrib-jsmath=1.0.1
  - sphinxcontrib-qthelp=1.0.3
  - sphinxcontrib-serializinghtml=1.1.4
  - sphinxcontrib-websupport=1.2.4
  - spyder=5.0.0
  - spyder-kernels=2.0.1
  - sqlalchemy=1.4.6
  - sqlite=3.35.4
  - statsmodels=0.12.2
  - sympy=1.7.1
  - tbb=2020.2
  - tblib=1.7.0
  - tensorboard=2.4.1
  - tensorboard-plugin-wit=1.8.0
  - tensorflow-base=2.3.0
  - tensorflow-estimator=2.4.0
  - terminado=0.9.4
  - testpath=0.4.4
  - text-unidecode=1.3
  - textdistance=4.2.1
  - threadpoolctl=2.1.0
  - three-merge=0.1.1
  - tifffile=2021.3.31
  - tinycss=0.4
  - tk=8.6.10
  - toml=0.10.2
  - toolz=0.11.1
  - tornado=6.1
  - tqdm=4.60.0
  - traitlets=5.0.5
  - typed-ast=1.4.2
  - typing-extensions=3.7.4.3=0
  - typing_extensions=3.7.4.3
  - ujson=4.0.2
  - unicodecsv=0.14.1
  - unidecode=1.2.0
  - urllib3=1.26.4
  - vc=14.2
  - vs2015_runtime=14.28.29325
  - watchdog=1.0.2
  - wcwidth=0.2.5
  - webencodings=0.5.1
  - werkzeug=1.0.1
  - wheel=0.36.2
  - whichcraft=0.6.1
  - widgetsnbextension=3.5.1
  - win_inet_pton=1.1.0
  - win_unicode_console=0.5
  - wincertstore=0.2
  - winpty=0.4.3=4
  - wrapt=1.12.1
  - xlrd=2.0.1
  - xlsxwriter=1.3.8
  - xlwings=0.23.0
  - xlwt=1.3.0
  - xmltodict=0.12.0
  - xz=5.2.5
  - yaml=0.2.5
  - yapf=0.30.0
  - yarl=1.6.3
  - zeromq=4.3.4
  - zfp=0.5.5
  - zict=2.0.0
  - zipp=3.4.1
  - zlib=1.2.11
  - zope=1.0
  - zope.event=4.5.0
  - zope.interface=5.3.0
  - zstd=1.4.9
  - pip:
    - absl-py==0.11.0
    - bs4==0.0.1
    - cachetools==4.2.1
    - cssselect==1.1.0
    - fake-useragent==0.1.11
    - feedparser==6.0.2
    - flatbuffers==1.12
    - gast==0.3.3
    - google-auth==1.27.1
    - google-auth-oauthlib==0.4.3
    - grpcio==1.32.0
    - oauthlib==3.1.0
    - opencv-python==4.5.1.48
    - parse==1.19.0
    - protobuf==3.15.5
    - pyarduino==0.2.2
    - pyasn1-modules==0.2.8
    - pyee==8.1.0
    - pymysql==0.10.1
    - pyppeteer==0.2.5
    - pyquery==1.4.3
    - requests-html==0.10.0
    - scikit-learn==0.22.2.post1
    - sgmllib3k==1.0.0
    - tensorflow==2.4.1
    - termcolor==1.1.0
    - w3lib==1.22.0
    - websockets==8.1
    - yahoo-fin==0.8.8
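
To try recreating this environment, one would typically save the listing to a file and run (the file name is assumed):

conda env create -f environment.yml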

Community Discussions

Trending Discussions on h5py
  • Build an adjacency matrix from distances between cosmological objects
  • ERROR: Failed building wheel for h5py / Failed to build h5py / ERROR: Could not build wheels for h5py, which is required to install pyproject.toml-based projects
  • Error while installing TensorFlow with pip
  • Colab: (0) UNIMPLEMENTED: DNN library is not found
  • "ModuleNotFoundError: No module named 'keras.engine.base_layer_v1'" when running a PyInstaller .exe file
  • AWS Elastic Beanstalk - Failing to install requirements.txt on deployment
  • Cannot find conda info. Please verify your conda installation on EMR
  • conda install and conda build result in different dependency versions
  • How to solve no such node error in pytables and h5py
  • Why do I have to call MPI.Finalize() inside the destructor?

QUESTION

Build an adjacency matrix from distances between cosmological objects

Asked 2022-Apr-11 at 03:17

I'm probing the Illustris API, gathering information from a specific cosmological simulation for a given redshift value.

This is how I query the API:

import requests

baseUrl = 'http://www.tng-project.org/api/'
    
def get(path, params=None):
    # make HTTP GET request to path
    headers = {"api-key":"my_key"}
    r = requests.get(path, params=params, headers=headers)

    # raise exception if response code is not HTTP SUCCESS (200)
    r.raise_for_status()

    if r.headers['content-type'] == 'application/json':
        return r.json() # parse json responses automatically
    
    if 'content-disposition' in r.headers:
        filename = r.headers['content-disposition'].split("filename=")[1]
        with open(f'sky_dataset/simulations/{filename}', 'wb') as f:
            f.write(r.content)
        return filename # return the filename string
    return r

Below I get the star coordinates for a given subhalo in this particular simulation. Note that (if I'm doing it right) distances have already been converted from ckpc/h to physical kpc.

Physical coordinates are the actual distances you would measure between them if you froze space and started laying out measuring rods:

import h5py
import numpy as np
import matplotlib as mpl
import matplotlib.pyplot as plt

simulation_id = 100
redshift = 0.57
subhalo_id = 99

scale_factor = 1.0 / (1+redshift)
little_h = 0.704

params = {'stars':'Coordinates,GFM_Metallicity'}

url = "http://www.tng-project.org/api/Illustris-1/snapshots/z=" + str(redshift) + "/subhalos/" + str(subhalo_id)
sub = get(url) # get json response of subhalo properties
saved_filename = get(url + "/cutout.hdf5",params) # get and save HDF5 cutout file

with h5py.File(f'sky_dataset/simulations/{saved_filename}') as f:
    # NOTE! If the subhalo is near the edge of the box, you must take the periodic boundary into account! (we ignore it here)
    dx = f['PartType4']['Coordinates'][:,0] - sub['pos_x']
    dy = f['PartType4']['Coordinates'][:,1] - sub['pos_y']
    dz = f['PartType4']['Coordinates'][:,2] - sub['pos_z']
    
    rr = np.sqrt(dx**2 + dy**2 + dz**2)
    rr *= scale_factor/little_h # ckpc/h -> physical kpc

    fig = plt.figure(figsize=(12,12))
    with mpl.rc_context(rc={'axes3d.grid': True}):
        ax = fig.add_subplot(projection='3d')

        # Plot the values
        ax.scatter(dx, dy, dz)
        ax.set_xlabel('X-axis')
        ax.set_ylabel('Y-axis')
        ax.set_zlabel('Z-axis')
    plt.show()

The above code produces a 3D scatter plot of the star positions (figure omitted).

As requested in a comment, here are truncated examples of dx, dy, and dz:

dx = [ 2.63370612e-01  3.48350511e-01 -1.23379511e-02 ...  6.63767411e+00
  1.32910697e+01  8.75469902e+00]

dy = [  0.33889825   0.21808108   0.50170807 ...   8.95542985  -9.84251952
 -16.38661054]

dz = [ -0.26469788  -0.10382767  -0.16625317 ...  -4.84708218 -13.77888398
  10.42730599]

My aim is to build a connectivity network for this system, starting with a square (symmetric) adjacency matrix, whereby any two stars (or vertices) are connected if they lie within the linking length l of 1.2 Mpc, that is:

A_ij = 1 if r_ij ≤ l, otherwise A_ij = 0

where r_ij is the distance between the two vertices i and j.

How can I get this adjacency matrix, based on my linking length?

ANSWER

Answered 2022-Apr-11 at 01:12

A solution using sklearn.neighbors.radius_neighbors_graph and your example data:

import numpy as np
from sklearn.neighbors import radius_neighbors_graph

# Your example data in runnable format
dx = np.array([2.63370612e-01, 3.48350511e-01, -1.23379511e-02, 
               6.63767411e+00, 1.32910697e+01,  8.75469902e+00])
dy = np.array([0.33889825,  0.21808108,   0.50170807, 
               8.95542985, -9.84251952, -16.38661054])
dz = np.array([-0.26469788,  -0.10382767, -0.16625317, 
               -4.84708218, -13.77888398, 10.42730599])

# Build a coordinate matrix with columns x, y, z, with one star per row
X = np.column_stack([dx, dy, dz])

print(X)
[[ 2.63370612e-01  3.38898250e-01 -2.64697880e-01]
 [ 3.48350511e-01  2.18081080e-01 -1.03827670e-01]
 [-1.23379511e-02  5.01708070e-01 -1.66253170e-01]
 [ 6.63767411e+00  8.95542985e+00 -4.84708218e+00]
 [ 1.32910697e+01 -9.84251952e+00 -1.37788840e+01]
 [ 8.75469902e+00 -1.63866105e+01  1.04273060e+01]]

# Find the neighbours of each star, restricted to distance lower than radius=1.2
C = radius_neighbors_graph(X, 1.2)

# C is the connectivity matrix in Compressed Sparse Row (CSR) format. 
# For demonstration purposes, convert CSR matrix to dense representation 
# as a numpy matrix
C.todense()

matrix([[0., 1., 1., 0., 0., 0.],
        [1., 0., 1., 0., 0., 0.],
        [1., 1., 0., 0., 0., 0.],
        [0., 0., 0., 0., 0., 0.],
        [0., 0., 0., 0., 0., 0.],
        [0., 0., 0., 0., 0., 0.]])

For your example data of six stars, the connectivity matrix shows:

  • Star 0 (row 0) is within 1.2 distance units (kpc) of Stars 1 and 2
  • Stars 1 and 2 are within 1.2 kpc of each other

(You asked for a linking distance of 1.2 Mpc, which would correspond to radius=1200. For demo purposes, here I used radius=1.2, corresponding to 1.2 kpc, because all six stars are within 1.2 Mpc of each other, which would have resulted in a rather boring connectivity matrix.)

Source https://stackoverflow.com/questions/71820465
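
As an aside (not part of the quoted answer), the same adjacency matrix can be built without scikit-learn using scipy's k-d tree; a minimal sketch, reusing dx, dy, dz from the answer above:

import numpy as np
from scipy.spatial import cKDTree

# X: the (n_stars, 3) coordinate matrix, as built in the answer above
X = np.column_stack([dx, dy, dz])

tree = cKDTree(X)                    # spatial index over the star coordinates
pairs = tree.query_pairs(r=1.2)      # all index pairs (i, j) with distance <= 1.2
A = np.zeros((len(X), len(X)), dtype=int)
for i, j in pairs:
    A[i, j] = A[j, i] = 1            # symmetric adjacency; diagonal stays 0
print(A)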

Community Discussions, Code Snippets contain sources that include Stack Exchange Network

Vulnerabilities

No vulnerabilities reported

Install h5py

You can download it from GitHub.
You can use h5py like any standard Python library. You will need a development environment consisting of a Python distribution (including header files), a compiler, pip, and git. Make sure that pip, setuptools, and wheel are up to date. When using pip, it is generally recommended to install packages in a virtual environment to avoid changing system packages; a command sketch follows.
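
A hedged sketch of the usual install flow (prebuilt wheels cover most platforms; the environment name is a placeholder):

$ python -m venv .venv
$ source .venv/bin/activate
$ pip install --upgrade pip setuptools wheel
$ pip install h5py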

Support

For any new features, suggestions, and bugs, create an issue on GitHub. If you have questions, check and ask on the Stack Overflow community page.
