devdocs | API Documentation Browser | Frontend Framework library

by freeCodeCamp | Ruby | Version: Current | License: Non-SPDX

kandi X-RAY | devdocs Summary

devdocs is a Ruby library typically used in User Interface, Frontend Framework, React, and Electron applications. devdocs has no reported bugs or vulnerabilities and has medium support; however, it carries a Non-SPDX license. You can download it from GitHub.
DevDocs combines multiple developer documentations in a clean and organized web UI with instant search, offline support, mobile version, dark theme, keyboard shortcuts, and more. DevDocs was created by Thibaut Courouble and is operated by freeCodeCamp.

Support

  • devdocs has a medium active ecosystem.
  • It has 28,474 stars, 1,913 forks, and 626 watchers.
  • It had no major release in the last 12 months.
  • There are 101 open issues and 886 closed issues. On average, issues are closed in 129 days. There are 25 open pull requests and 0 closed pull requests.
  • It has a neutral sentiment in the developer community.
  • The latest version of devdocs is current.

Quality

  • devdocs has 0 bugs and 0 code smells.

Security

  • devdocs has no vulnerabilities reported, and its dependent libraries have no vulnerabilities reported.
  • devdocs code analysis shows 0 unresolved vulnerabilities.
  • There are 0 security hotspots that need review.

License

  • devdocs has a Non-SPDX license.
  • A Non-SPDX license may be an open-source license that is not SPDX-compliant, or a license that is not open source at all; review it closely before use.

Reuse

  • devdocs releases are not available. You will need to build from source and install.
  • Installation instructions, examples, and code snippets are available.
  • It has 36,196 lines of code, 1,558 functions, and 834 files.
  • It has low code complexity. Code complexity directly impacts the maintainability of the code.
Top functions reviewed by kandi - BETA

kandi has reviewed devdocs and identified the functions below as its top functions. This is intended to give you an instant insight into the functionality devdocs implements, and to help you decide whether it suits your requirements.

  • Runs the image.
  • Builds a page.
  • Takes an absolute path and returns the matching URL.
  • Builds the navigation links.
  • Sorts the number of strings.
  • Fetches the URL for the user.
  • Returns a hash of options.
  • Returns the path relative to the destination directory.
  • Normalizes a path.
  • Returns the diff of the given object.

devdocs Key Features

API Documentation Browser

Quick Start

# Clone the repository and install its Ruby dependencies
git clone https://github.com/freeCodeCamp/devdocs.git && cd devdocs
gem install bundler
bundle install
# Download the default documentation sets, then start the server
bundle exec thor docs:download --default
bundle exec rackup
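
Once `rackup` is running, DevDocs should be reachable at localhost:9292 (rack's default port). As a hedged sketch, you can also download only the documentation sets you need instead of the defaults; the names below are examples, and `thor docs:list` prints the real ones:

bundle exec thor docs:list                # list available documentation names
bundle exec thor docs:download html css   # download specific sets (example names)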
                      

Available Commands

# Server
rackup              # Start the server (ctrl+c to stop)
rackup --help       # List server options

# Docs
thor docs:list      # List available documentations
thor docs:download  # Download one or more documentations
thor docs:manifest  # Create the manifest file used by the app
thor docs:generate  # Generate/scrape a documentation
thor docs:page      # Generate/scrape a documentation page
thor docs:package   # Package a documentation for use with docs:download
thor docs:clean     # Delete documentation packages

# Console
thor console        # Start a REPL
thor console:docs   # Start a REPL in the "Docs" module

# Tests can be run quickly from within the console using the "test" command.
# Run "help test" for usage instructions.
thor test:all       # Run all tests
thor test:docs      # Run "Docs" tests
thor test:app       # Run "App" tests

# Assets
thor assets:compile # Compile assets (not required in development mode)
thor assets:clean   # Clean old assets
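
A small usage sketch combining these commands (the doc name is an example; behavior is assumed from the command descriptions above):

bundle exec thor docs:generate html   # scrape the "html" documentation
bundle exec thor docs:package html    # package it for use with docs:download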
                      

TensorFlow rotate with random uniform: takes 1 positional argument but 2 were given

import tensorflow as tf

# random_rotation expects a NumPy array and a rotation range in degrees
image = tf.random.normal((180, 180, 3))
rotated = tf.keras.preprocessing.image.random_rotation(
    image.numpy(),
    tf.random.uniform(shape=(), minval=40, maxval=90).numpy(),
    channel_axis=2)

import tensorflow as tf
import os
import matplotlib.pyplot as plt

_URL = 'https://storage.googleapis.com/mledu-datasets/cats_and_dogs_filtered.zip'
path_to_zip = tf.keras.utils.get_file('cats_and_dogs.zip', origin=_URL, extract=True)
PATH = os.path.join(os.path.dirname(path_to_zip), 'cats_and_dogs_filtered')

train_dir = os.path.join(PATH, 'train')
validation_dir = os.path.join(PATH, 'validation')

BATCH_SIZE = 1
IMG_SIZE = (160, 160)

train_ds = tf.keras.utils.image_dataset_from_directory(train_dir,
                                                       shuffle=True,
                                                       batch_size=BATCH_SIZE,
                                                       image_size=IMG_SIZE)

# RandomRotation is configured once, then applied to batches of images
data_augmentation = tf.keras.Sequential([
  tf.keras.layers.RandomRotation(tf.random.uniform(shape=(), minval=40, maxval=90)),
])
for image, _ in train_ds.take(1):
  plt.figure(figsize=(10, 10))
  first_image = image[0]
  for i in range(9):
    ax = plt.subplot(3, 3, i + 1)
    augmented_image = data_augmentation(tf.expand_dims(first_image, 0), training=True)
    plt.imshow(augmented_image[0] / 255)
    plt.axis('off')
                      
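One caveat worth adding (an addition, not part of the original answer): in current TensorFlow, `tf.keras.layers.RandomRotation` takes its `factor` as a fraction of a full circle (2π) rather than degrees, so a 40-90 degree range would be written as a tuple of fractions. A minimal sketch, assuming TensorFlow 2.6+:

import tensorflow as tf

# Hedged sketch: factor is a fraction of 2*pi, so 40-90 degrees ~ (40/360, 90/360)
rotate = tf.keras.layers.RandomRotation(factor=(40 / 360, 90 / 360))
images = tf.random.normal((4, 180, 180, 3))  # a batch of four random "images"
rotated = rotate(images, training=True)      # a random rotation per image
print(rotated.shape)                         # (4, 180, 180, 3)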
                      

Why can't I import torch on Windows?

import torch
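
The snippet above is just the failing import. As a hedged suggestion (not from the original answer), the usual fix on Windows is to install PyTorch with the command generated by the selector on pytorch.org, for example the CPU-only wheels:

pip3 install torch torchvision torchaudio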
                      

How can I resolve Python module import problems stemming from the failed import of NumPy C-extensions for running Spark/Python code on a MacBook Pro?

Do you wish the installer to initialize Miniforge3
by running conda init? [yes|no] >>> choose 'yes'

If you'd prefer that conda's base environment not be activated on startup,
set the auto_activate_base parameter to false:
conda config --set auto_activate_base false  # Set to 'false' for now

conda create -y -n pyspark_conda_env -c conda-forge numpy conda-pack
conda activate pyspark_conda_env
conda pack -f -o pyspark_conda_env.tar.gz

(pyspark_conda_env) MacBook-Pro ~$ python --version
Python 3.10.2
(pyspark_conda_env) MacBook-Pro ~$ which python
/Users/.../miniforge3/envs/pyspark_conda_env/bin/python

export PYSPARK_PYTHON=/Users/.../miniforge3/envs/pyspark_conda_env/bin/python

spark-submit --archives pyspark_conda_env.tar.gz test.py

export JAVA_HOME=/Library/Java/JavaVirtualMachines/jdk1.8.0_321.jdk/Contents/Home
export SPARK_HOME=/Users/.../Spark2/spark-3.2.1-bin-hadoop2.7
export SBT_HOME=/Users/.../Spark2/sbt
export SCALA_HOME=/Users/.../Spark2/scala-2.12.15
export PATH=$JAVA_HOME/bin:$SBT_HOME/bin:$SBT_HOME/lib:$SCALA_HOME/bin:$SCALA_HOME/lib:$PATH
export PATH=$JAVA_HOME/bin:$SPARK_HOME:$SPARK_HOME/bin:$SPARK_HOME/sbin:$PATH
export PYSPARK_PYTHON=/Users/.../miniforge3/envs/pyspark_conda_env/bin/python
PYTHONPATH=$SPARK_HOME/python:$SPARK_HOME/python/lib/py4j-0.10.9.3-src.zip:$PYTHONPATH
PATH="/Library/Frameworks/Python.framework/Versions/3.10/bin:${PATH}"
export PATH

# export PYSPARK_DRIVER_PYTHON="jupyter"        # Not required
# export PYSPARK_DRIVER_PYTHON_OPTS="notebook"  # Not required

from pyspark.sql import SparkSession
from pyspark.sql.types import *
from pyspark.sql.functions import *
(etc.)

if __name__ == "__main__":
    main(SparkSession.builder.getOrCreate())

pip3 install -t dependencies -r requirements.txt
zip -r dep.zip dependencies # Possibly incorrect...
zip -r dep.zip .            # Correct if run from within folder containing requirements.txt
spark-submit --py-files dep.zip test.py
                      
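A hedged addition borrowed from Spark's documentation on conda environments (not part of the original answer): the archive passed to `--archives` can be aliased with `#environment`, so the executors run Python from the unpacked copy:

export PYSPARK_DRIVER_PYTHON=python             # driver uses the local env
export PYSPARK_PYTHON=./environment/bin/python  # executors use the unpacked archive
spark-submit --archives pyspark_conda_env.tar.gz#environment test.py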

Unable to import Pandas on Replit.com - Python

# replit.nix (hedged: the filename is assumed from Replit's standard Nix setup)
{ pkgs }: {
    deps = [
        pkgs.bashInteractive
        (pkgs.python38.withPackages (p: [p.pandas]))
    ];
}
                      

NumPy from the Alpine package repo fails to import C extensions

# Renames NumPy's C-extension files so the interpreter can resolve them,
# e.g. foo.cpython-39-x86_64-linux-musl.so -> foo.so
RUN find /usr/lib/python3.9/site-packages -iname "*.so" -exec sh -c 'x="{}"; mv "$x" "${x/cpython-39-x86_64-linux-musl./}"' \;
                      

What is a "closure" in Julia?

julia> function est_mean(x)
           function fun(m)
               return m - mean(x)
           end
           val = find_zero(fun, 0.0)
           @show val, mean(x)
           return fun # explicitly return the inner function to inspect it
       end
est_mean (generic function with 1 method)

julia> x = rand(10)
10-element Vector{Float64}:
 0.6699650145575134
 0.8208379672036165
 0.4299946498764684
 0.1321653923513042
 0.5552854476018734
 0.8729613266067378
 0.5423030870674236
 0.15751882823315777
 0.4227087678654101
 0.8594042895489912

julia> fun = est_mean(x)
(val, mean(x)) = (0.5463144770912497, 0.5463144770912497)
fun (generic function with 1 method)

julia> dump(fun)
fun (function of type var"#fun#3"{Vector{Float64}})
  x: Array{Float64}((10,)) [0.6699650145575134, 0.8208379672036165, 0.4299946498764684, 0.1321653923513042, 0.5552854476018734, 0.8729613266067378, 0.5423030870674236, 0.15751882823315777, 0.4227087678654101, 0.8594042895489912]

julia> fun.x
10-element Vector{Float64}:
 0.6699650145575134
 0.8208379672036165
 0.4299946498764684
 0.1321653923513042
 0.5552854476018734
 0.8729613266067378
 0.5423030870674236
 0.15751882823315777
 0.4227087678654101
 0.8594042895489912

julia> fun(10)
9.453685522908751

julia> function gen()
          x = []
          return v -> push!(x, v)
       end
gen (generic function with 1 method)

julia> fun2 = gen()
#4 (generic function with 1 method)

julia> fun2.x
Any[]

julia> fun2(1)
1-element Vector{Any}:
 1

julia> fun2.x
1-element Vector{Any}:
 1

julia> fun2(100)
2-element Vector{Any}:
   1
 100

julia> fun2.x
2-element Vector{Any}:
   1
 100
                      
# X, y, and log_likelihood come from the asker's context; they are not defined here
struct LikelihoodClosure
    X
    y
end

(l::LikelihoodClosure)(β) = -log_likelihood(l.X, l.y, β)
make_closures(X, y) = LikelihoodClosure(X, y)
nll = make_closures(X, y)

julia> f(x) = y -> x + y
f (generic function with 1 method)

julia> f(1) # that's the closure value
#1 (generic function with 1 method)

julia> typeof(f(1)) # that's the closure type
var"#1#2"{Int64}

julia> f(1).x
1

julia> propertynames(f(1)) # behold, it has a field `x`!
(:x,)

julia> eval(Expr(:new, var"#1#2"{Int64}, 22))
#1 (generic function with 1 method)

julia> eval(Expr(:new, var"#1#2"{Int64}, 22))(2)
24
                      

How does the `methods` function work in Julia?

julia> f(a=1, b=1, c=1, d=1) = a + 2b + 3c + 4d
f (generic function with 8 methods)

julia> f(2,4)
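
The transcript is cut off after `f(2,4)`. As a hedged illustration (an addition, not the original answer): each optional positional argument makes Julia synthesize one extra method per arity, so a fresh function with four defaulted arguments gets five methods; the count of 8 above presumably includes earlier definitions of `f` in the same session.

# Hedged sketch: optional positional arguments expand into one method per arity
g(a=1, b=1) = a + 2b
length(methods(g))  # 3 methods: g(), g(a), and g(a, b)
g(2, 4)             # calls g(a, b): 2 + 2*4 == 10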
                      

Importing C modules from an embedded Python interpreter (pybind11) in a shared object raises an undefined symbol exception

// main.cc
// Preloading libpython with RTLD_GLOBAL makes the interpreter's symbols
// visible to extension modules (such as NumPy's), which Python dlopen()s later.
#include "pybind11/embed.h"
#include <dlfcn.h>
namespace py = pybind11;

extern "C" {
void * python;

int create() {
  python = dlopen("/usr/lib/x86_64-linux-gnu/libpython3.8.so", RTLD_NOW | RTLD_GLOBAL);
  return 0;
}

int destroy() {
  dlclose(python);
  return 0;
}

int main() {
  py::scoped_interpreter guard{};
  auto py_module = py::module::import("numpy");
  auto version   = py_module.attr("__version__");
  py::print(version);
  return 0;
}
}

// load.cc
// Loads the shared object, preloads libpython via create(), runs the embedded
// interpreter via the exported "main", then cleans up.
#include <dlfcn.h>

int main() {
  void * lib = dlopen("./libissue.so", RTLD_NOW | RTLD_DEEPBIND);
  int(*fnc)(void) = (int(*)(void))dlsym(lib, "main");
  int(*create)(void) = (int(*)(void))dlsym(lib, "create");
  int(*destroy)(void) = (int(*)(void))dlsym(lib, "destroy");
  create();
  fnc();
  destroy();
  dlclose(lib);
  return 0;
}
                      
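For completeness, a hedged sketch of how these two files might be built and run (compiler flags, library names, and paths are assumptions; adjust to your setup):

# pybind11 and Python headers must be discoverable
g++ -shared -fPIC main.cc -o libissue.so $(python3-config --includes) -lpython3.8 -ldl
g++ load.cc -o load -ldl
./load   # prints the NumPy version if the preloading trick works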

ModuleNotFoundError: No module named 'pandas' | `pip install pandas` & `poetry add pandas` fail

pip uninstall pandas

pip install pandas
or
pip3 install pandas

import pandas as pd
pd.__version__
                      
                      python get-poetry.py --uninstall
                      POETRY_UNINSTALL=1 python get-poetry.py
                      
                      me@PF2DCSXD:/mnt/c/Users/me/Documents/GitHub/workers-python/workers/data_simulator$ curl -sSL https://raw.githubusercontent.com/python-poetry/poetry/master/get-poetry.py | python3
                      Retrieving Poetry metadata
                      
                      This installer is deprecated. Poetry versions installed using this script will not be able to use 'self update' command to upgrade to 1.2.0a1 or later.
                      # Welcome to Poetry!
                      
                      This will download and install the latest version of Poetry,
                      a dependency and package manager for Python.
                      
                      It will add the `poetry` command to Poetry's bin directory, located at:
                      
                      $HOME/.poetry/bin
                      
                      This path will then be added to your `PATH` environment variable by
                      modifying the profile file located at:
                      
                      $HOME/.profile
                      
                      You can uninstall at any time by executing this script with the --uninstall option,
                      and these changes will be reverted.
                      
                      Installing version: 1.1.11
                        - Downloading poetry-1.1.11-linux.tar.gz (64.48MB)
                      
                      Poetry (1.1.11) is installed now. Great!
                      
                      To get started you need Poetry's bin directory ($HOME/.poetry/bin) in your `PATH`
                      environment variable. Next time you log in this will be done
                      automatically.
                      
                      To configure your current shell run `source $HOME/.poetry/env`
                      
                      me@PF2DCSXD:/mnt/c/Users/me/Documents/GitHub/workers-python/workers/data_simulator$ poetry install
                      bash: /home/me/.local/bin/poetry: No such file or directory
                      me@PF2DCSXD:/mnt/c/Users/me/Documents/GitHub/workers-python/workers/data_simulator$ poetry run python3 cli.py
                      bash: /home/me/.local/bin/poetry: No such file or directory
                      me@PF2DCSXD:/mnt/c/Users/me/Documents/GitHub/workers-python/workers/data_simulator$ source $HOME/.poetry/env
                      me@PF2DCSXD:/mnt/c/Users/me/Documents/GitHub/workers-python/workers/data_simulator$ poetry install
                      Installing dependencies from lock file
                      
                      Package operations: 82 installs, 0 updates, 0 removals
                      
                      ...
                        • Installing numpy (1.19.5)
                      ...
                        • Installing pandas (1.3.1)
                      ...
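
The failures above come from the shell resolving a stale poetry path (/home/me/.local/bin/poetry) until source $HOME/.poetry/env puts the new install's bin directory on PATH. A minimal Python sketch to diagnose which poetry binary, if any, the current environment resolves; the two candidate paths are the installer defaults shown in the log, not anything project-specific:

import os
import shutil
from pathlib import Path

# Which executable does the current PATH resolve for "poetry", if any?
print("poetry on PATH:", shutil.which("poetry") or "<not found>")

# The two locations involved in the log above (installer defaults).
for candidate in (Path.home() / ".local/bin/poetry",
                  Path.home() / ".poetry/bin/poetry"):
    print(candidate, "exists" if candidate.exists() else "missing")

# Inspect PATH directly if neither location matches.
print(os.environ.get("PATH", "").split(os.pathsep))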
                      
                      

                      Matplotlib: is it possible to do a stepwise stacked plot?

                      import matplotlib.pyplot as plt
                      import numpy as np
                      
                      x = np.arange(1, 6)
                      y1 = np.random.rand(5) + 1
                      y2 = np.random.rand(5) + 2
                      y3 = np.random.rand(5) + 3
                      
                      plt.stackplot(x, y1, y2, y3, step='post', labels=['A', 'B', 'C'])
                      plt.xticks(x)
                      plt.legend()
                      plt.show()
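
One quirk of step='post' is that no segment is drawn after the last x value, so the final step can look cut off. A common workaround, sketched below rather than taken from the original answer, is to repeat each series' last value at an extended x position:

import matplotlib.pyplot as plt
import numpy as np

x = np.arange(1, 6)
y1 = np.random.rand(5) + 1
y2 = np.random.rand(5) + 2
y3 = np.random.rand(5) + 3

# Repeat the last value at x[-1] + 1 so the final step is drawn too.
x_ext = np.append(x, x[-1] + 1)
ys_ext = [np.append(y, y[-1]) for y in (y1, y2, y3)]

plt.stackplot(x_ext, *ys_ext, step='post', labels=['A', 'B', 'C'])
plt.xticks(x)
plt.legend()
plt.show()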
                      

                      Community Discussions

                      Trending Discussions on devdocs
                      • Tensorflow rotate with random uniform take 1 positional argument but 2 were given
                      • Azure DevOps AzureFunctionApp@1 not installing python dependencies
                      • Why I can't import torch windows
                      • How can I resolve Python module import problems stemming from the failed import of NumPy C-extensions for running Spark/Python code on a MacBook Pro?
                      • NumPy setup / import issue in lambda function
                      • Unable to import Pandas on Replit.com - Python
                      • Numpy from alpine package repo fails to import c-extensions
• What is a "closure" in Julia?
                      • How does the methods function work in Julia?
                      • Import c-modules from embedded Python interpreter (pybind11) in a shared object raises an undefined symbol exception

                      QUESTION

                      Tensorflow rotate with random uniform take 1 positional argument but 2 were given

                      Asked 2022-Apr-04 at 07:24

I have the following code that uses TensorFlow to compute a custom average loss over randomly rotated copies of an image:

                      import tensorflow as tf
                      import cv2
                      
                      #initialize x_hat    
                      img = cv2.imread("4.jpg")
                      x_hat = tf.Variable(img,name = 'x_hat') #img we want to attack 
                      
                      @tf.function 
                      def cost2():
                          image=x_hat
                          #Now it will generate 100 samples rotated
                          num_samples = 100
                          average_loss = 0
                      
                          for j in range(num_samples):
                      
                              #ADD ROTATION (there may be a problem here)
                              rotated = tf.keras.preprocessing.image.random_rotation(image, 
                              tf.random.uniform(shape=(),minval=40, maxval=90),channel_axis=2)
                      
                              #get logits
                              rotated_logits, _ = resnet(rotated)
                              #get average CUSTOM loss
                              average_loss+=-1 * tf.nn.softmax_cross_entropy_with_logits(logits=rotated_logits, labels=labels)/ num_samples
    return average_loss
                      

                      and here is how I call it

                      learning_rate = 1e-1
optim = tf.optimizers.SGD(learning_rate=learning_rate)
                      
                      epsilon = 2.0/255.0 # a really small perturbation
                      below = x - epsilon
                      above = x + epsilon
                      
                      demo_steps = 200
                      
                      
                      # projected gradient descent
                      for i in range(demo_steps):
                      
                          loss = optim.minimize(cost2, var_list=[x_hat])
                      
                          if (i+1) % 10 == 0:
                              print('step %d, loss=%g' % (i+1, loss.numpy()))
                      
                          projected = tf.clip_by_value(tf.clip_by_value(x_hat, below, above), 0, 1)
                      
                          with tf.control_dependencies([projected]):
                              x_hat.assign(projected)
                      
                      adv_robust = x_hat.numpy() 
                      

                      However, the following error returns to me once I run the code:

                      TypeError: in user code:
                      
                      <ipython-input-183-abde02909da7>:14 cost2  *
                          rotated = tf.keras.preprocessing.image.random_rotation(image, 
                      tf.random.uniform(shape=(),minval=40, maxval=90),channel_axis=2)
/home/me/.local/lib/python3.8/site-packages/keras_preprocessing/image/affine_transformations.py:55 random_rotation  *
                      theta = np.random.uniform(-rg, rg)
                      mtrand.pyx:1111 numpy.random.mtrand.RandomState.uniform  **
                          
                      
                      TypeError: __array__() takes 1 positional argument but 2 were given
                      

I am on TensorFlow 2.4.0, and my usage of random_rotation and random.uniform matches the TF 2.4.0 documentation (linked HERE and HERE in the original post). So, what am I missing?

                      ANSWER

                      Answered 2022-Apr-01 at 08:58

                      The error might be coming from using TF tensors. As stated in the docs you linked regarding random_rotation:

                      Performs a random rotation of a Numpy image tensor.

                      Meaning you cannot use TF tensors with this operation. If you are in eager execution mode you can use tensor.numpy():

                      import tensorflow as tf
                      
                      image = tf.random.normal((180, 180, 3))
rotated = tf.keras.preprocessing.image.random_rotation(
    image.numpy(),
    tf.random.uniform(shape=(), minval=40, maxval=90).numpy(),
    channel_axis=2)
                      

Otherwise, prefer the preprocessing layer tf.keras.layers.RandomRotation, since using NumPy in graph mode (for example, inside a function decorated with @tf.function) is not recommended.

Here is an example using tf.keras.layers.RandomRotation:

                      import tensorflow as tf
                      import os
                      import matplotlib.pyplot as plt
                      
                      _URL = 'https://storage.googleapis.com/mledu-datasets/cats_and_dogs_filtered.zip'
                      path_to_zip = tf.keras.utils.get_file('cats_and_dogs.zip', origin=_URL, extract=True)
                      PATH = os.path.join(os.path.dirname(path_to_zip), 'cats_and_dogs_filtered')
                      
                      train_dir = os.path.join(PATH, 'train')
                      validation_dir = os.path.join(PATH, 'validation')
                      
                      BATCH_SIZE = 1
                      IMG_SIZE = (160, 160)
                      
                      train_ds = tf.keras.utils.image_dataset_from_directory(train_dir,
                                                                                  shuffle=True,
                                                                                  batch_size=BATCH_SIZE,
                                                                                  image_size=IMG_SIZE)
                      
                      data_augmentation = tf.keras.Sequential([
                        tf.keras.layers.RandomRotation(tf.random.uniform(shape=(),minval=40, maxval=90)),
                      ])
                      for image, _ in train_ds.take(1):
                        plt.figure(figsize=(10, 10))
                        first_image = image[0]
                        for i in range(9):
                          ax = plt.subplot(3, 3, i + 1)
                          augmented_image = data_augmentation(tf.expand_dims(first_image, 0), training=True)
                          plt.imshow(augmented_image[0] / 255)
                          plt.axis('off')
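
One caveat with this example: RandomRotation interprets factor as a fraction of 2π rather than degrees, so feeding it raw values between 40 and 90 requests many full revolutions. If the intent is a 40-90 degree rotation, here is a sketch of the conversion (the (min, max) tuple form is part of the layer's documented API; everything else carries over from the example above):

import tensorflow as tf

# factor is a fraction of 2*pi; a (min, max) tuple bounds the random
# rotation, so 40-90 degrees becomes (40/360, 90/360).
data_augmentation = tf.keras.Sequential([
  tf.keras.layers.RandomRotation(factor=(40 / 360, 90 / 360)),
])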
                      

                      Source https://stackoverflow.com/questions/71703875

Community Discussions and Code Snippets include sources from the Stack Exchange Network.

                      Vulnerabilities

                      No vulnerabilities reported

                      Install devdocs

Unless you wish to contribute to the project, we recommend using the hosted version at devdocs.io. It's kept up to date and works offline out of the box. DevDocs is made of two pieces: a Ruby scraper that generates the documentation and metadata, and a JavaScript app powered by a small Sinatra app.

                      Support

                      Contributions are welcome. Please read the contributing guidelines.
