scipy | SciPy library main repository

by scipy | Python | Version: 1.10.1 | License: BSD-3-Clause

kandi X-RAY | scipy Summary

scipy is a Python library. It has no reported bugs or vulnerabilities, a build file is available, it carries a permissive license, and it has high support. You can install it with 'pip install scipy' or download it from GitHub or PyPI.
SciPy library main repository
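As a quick sanity check after installing, importing the package and printing its version is enough; a minimal sketch (the exact version string depends on what pip resolved for your environment):

import scipy
from scipy import sparse  # one of the many subpackages

print(scipy.__version__)          # e.g. '1.10.1'
print(sparse.random(3, 3, 0.5))   # quick smoke test of a subpackage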

kandi-Support

- scipy has a highly active ecosystem.
- It has 10948 star(s) with 4634 fork(s). There are 345 watchers for this library.
- There were 6 major release(s) in the last 6 months.
- There are 1388 open issues and 7548 have been closed. On average, issues are closed in 60 days. There are 295 open pull requests and 0 closed requests.
- It has a positive sentiment in the developer community.
- The latest version of scipy is 1.10.1.

kandi-Quality

- scipy has 0 bugs and 0 code smells.

kandi-Security

- scipy has no vulnerabilities reported, and its dependent libraries have no vulnerabilities reported.
- scipy code analysis shows 0 unresolved vulnerabilities.
- There are 0 security hotspots that need review.

kandi-License

- scipy is licensed under the BSD-3-Clause License. This license is permissive.
- Permissive licenses have the fewest restrictions, and you can use them in most projects.

kandi-Reuse

- scipy releases are available to install and integrate.
- A deployable package is available on PyPI.
- A build file is available, so you can build the component from source.
- scipy saves you 191325 person hours of effort in developing the same functionality from scratch.
- It has 216613 lines of code, 17977 functions and 957 files.
- It has high code complexity. Code complexity directly impacts the maintainability of the code.
Top functions reviewed by kandi - BETA

kandi has reviewed scipy and discovered the below as its top functions. This is intended to give you an instant insight into the functionality scipy implements, and to help you decide if it suits your requirements.

- Calculate least squares.
- Linear Grammar algorithm.
- Solve an IVP.
- Solve a problem.
- Compute the lsqr of A and b.
- Integrate a quadratic integrand.
- Test a permutation test.
- Solve the linear operator.
- Solve a binary quadratic problem.
- Solve a linear problem.

Get all kandi verified functions for this library.
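As a hedged illustration of two of the capabilities named above (least squares and IVP solving), the corresponding public scipy entry points are scipy.optimize.least_squares and scipy.integrate.solve_ivp; a minimal sketch of each, where the fitted model and the ODE are made up purely for illustration:

import numpy as np
from scipy.optimize import least_squares
from scipy.integrate import solve_ivp

# Least squares: fit y = a * exp(b * t) to noisy data (illustrative model)
t = np.linspace(0, 1, 20)
y = 2.0 * np.exp(1.3 * t) + 0.01 * np.random.default_rng(0).standard_normal(20)

def residuals(params):
    a, b = params
    return a * np.exp(b * t) - y

fit = least_squares(residuals, x0=[1.0, 1.0])
print(fit.x)  # roughly [2.0, 1.3]

# Solve an IVP: dy/dt = -0.5 * y with y(0) = 1 (illustrative ODE)
sol = solve_ivp(lambda t, y: -0.5 * y, t_span=(0, 10), y0=[1.0])
print(sol.y[0, -1])  # approximately exp(-5)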

                                                                                                      scipy Key Features

                                                                                                      SciPy library main repository

                                                                                                      scipy Examples and Code Snippets

NumPy Distutils - Users Guide: SciPy pure Python package example
Python | Lines of Code: 0 | License: Permissive (BSD-3-Clause)

if __name__ == "__main__":
    from numpy.distutils.core import setup
    # setup(**configuration(top_path='').todict())
    setup(configuration=configuration)
Convert a value to a SciPy sparse tensor
Python | Lines of Code: 34 | License: Non-SPDX (Apache License 2.0)

def _convert_scipy_sparse_tensor(value, expected_input):
    """Handle scipy sparse tensor conversions.

    This method takes a value 'value' and returns the proper conversion. If
    value is a scipy sparse tensor and the expected input is a dense tensor,
    we densify 'value'. If value is a scipy sparse tensor and the expected
    input is a TF SparseTensor, we convert 'value' to a SparseTensor. If
    'value' is not a scipy sparse tensor, or scipy is not imported, we pass
    it through unchanged.

    Args:
        value: An object that may be a scipy sparse tensor
        expected_input: The expected input placeholder.

    Returns:
        The possibly-converted 'value'.
    """
    if issparse is not None and issparse(value):
        if backend.is_sparse(expected_input):
            sparse_coo = value.tocoo()
            row, col = sparse_coo.row, sparse_coo.col
            data, shape = sparse_coo.data, sparse_coo.shape
            indices = np.concatenate(
                (np.expand_dims(row, 1), np.expand_dims(col, 1)), 1)
            return sparse_tensor.SparseTensor(indices, data, shape)
        else:
            if ops.executing_eagerly_outside_functions():
                # In TF2 we do not silently densify sparse matrices.
                raise ValueError('A SciPy sparse matrix was passed to a model '
                                 'that expects dense inputs. Please densify your '
                                 'inputs first, such as by calling `x.toarray()`.')
            return value.toarray()
    else:
        return value
Returns a scipy name scope
Python | Lines of Code: 25 | License: Non-SPDX (Apache License 2.0)

def name_scope(name):
    """A context manager for use when defining a Python op.

    This context manager pushes a name scope, which will make the name of all
    operations added within it have a prefix.

    For example, to define a new Python op called `my_op`:

        def my_op(a):
            with tf.name_scope("MyOp") as scope:
                a = tf.convert_to_tensor(a, name="a")
                # Define some computation that uses `a`.
                return foo_op(..., name=scope)

    When executed, the Tensor `a` will have the name `MyOp/a`.

    Args:
        name: The prefix to use on all names created within the name scope.

    Returns:
        Name scope context manager.
    """
    return ops.name_scope_v2(name)
Convert a SciPy sparse matrix to a SparseTensor
Python | Lines of Code: 10 | License: Non-SPDX (Apache License 2.0)

def _scipy_sparse_to_sparse_tensor(t):
    """Converts a SciPy sparse matrix to a SparseTensor."""
    sparse_coo = t.tocoo()
    row, col = sparse_coo.row, sparse_coo.col
    data, shape = sparse_coo.data, sparse_coo.shape
    if issubclass(data.dtype.type, np.floating):
        data = data.astype(backend.floatx())
    indices = np.concatenate(
        (np.expand_dims(row, axis=1), np.expand_dims(col, axis=1)), axis=1)
    return sparse_tensor.SparseTensor(indices, data, shape)
Adding a non-zero scalar to a sparse matrix
Python | Lines of Code: 35 | License: Strong Copyleft (CC BY-SA 4.0)
                                                                                                      In [166]: from scipy import sparse
                                                                                                      In [167]: M = sparse.random(5,5,.2,'csc')
                                                                                                      In [168]: M
                                                                                                      Out[168]: 
<5x5 sparse matrix of type '<class 'numpy.float64'>'
                                                                                                          with 5 stored elements in Compressed Sparse Column format>
                                                                                                      In [169]: M.A
                                                                                                      Out[169]: 
                                                                                                      array([[0.24975586, 0.        , 0.        , 0.        , 0.        ],
                                                                                                             [0.        , 0.        , 0.        , 0.        , 0.        ],
                                                                                                             [0.        , 0.        , 0.        , 0.6863175 , 0.        ],
                                                                                                             [0.43488131, 0.19245474, 0.26190903, 0.        , 0.        ],
                                                                                                             [0.        , 0.        , 0.        , 0.        , 0.        ]])
                                                                                                      
                                                                                                      
                                                                                                      In [171]: x=np.random.laplace(0,10)
                                                                                                      In [172]: x
                                                                                                      Out[172]: 0.4773577605565098
                                                                                                      In [173]: M+x
                                                                                                      Traceback (most recent call last):
                                                                                                        Input In [173] in 
                                                                                                          M+x
                                                                                                        File /usr/local/lib/python3.8/dist-packages/scipy/sparse/_base.py:464 in __add__
                                                                                                          raise NotImplementedError('adding a nonzero scalar to a '
                                                                                                      NotImplementedError: adding a nonzero scalar to a sparse matrix is not supported
                                                                                                      
                                                                                                      In [174]: M.data += x
                                                                                                      In [175]: M.A
                                                                                                      Out[175]: 
                                                                                                      array([[0.72711362, 0.        , 0.        , 0.        , 0.        ],
                                                                                                             [0.        , 0.        , 0.        , 0.        , 0.        ],
                                                                                                             [0.        , 0.        , 0.        , 1.16367526, 0.        ],
                                                                                                             [0.91223907, 0.6698125 , 0.73926679, 0.        , 0.        ],
                                                                                                             [0.        , 0.        , 0.        , 0.        , 0.        ]])
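Note that M.data += x adds the scalar only to the stored (nonzero) entries, which is why it works as an in-place workaround here. If you genuinely need every element shifted, including the zeros, you have to densify first; a minimal sketch, assuming the matrix is small enough to hold as a dense array:

dense = M.toarray() + x   # dense ndarray; every element, including the zeros, is shifted by x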
                                                                                                      
Scipy optimize to target
Python | Lines of Code: 2 | License: Strong Copyleft (CC BY-SA 4.0)
                                                                                                      res = scipy.optimize.minimize_scalar(lambda x: goal_seek_func(x)**2) 
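The one-liner above assumes a user-defined goal_seek_func whose value you want driven to zero; a minimal self-contained sketch, where goal_seek_func is just an illustrative placeholder rather than anything from the original question:

import scipy.optimize

def goal_seek_func(x):
    # hypothetical objective: we want goal_seek_func(x) == 0 at the target
    return x**2 - 4.0

# Minimize the squared residual so the optimum sits where goal_seek_func(x) == 0
res = scipy.optimize.minimize_scalar(lambda x: goal_seek_func(x)**2)
print(res.x)  # typically close to 2.0 with the default bracket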
                                                                                                      
Altair: Creating a layered violin + stripplot
Python | Lines of Code: 67 | License: Strong Copyleft (CC BY-SA 4.0)
                                                                                                      import altair as alt
                                                                                                      from vega_datasets import data
                                                                                                      
                                                                                                      df = data.cars()
                                                                                                      
                                                                                                      # 1. Create violin plot
                                                                                                      violin = alt.Chart(df).transform_density(
                                                                                                          "Horsepower",
                                                                                                          as_=["Horsepower", "density"],
                                                                                                      ).mark_area().encode(
                                                                                                          x="Horsepower:Q",
                                                                                                          y=alt.Y("density:Q", stack="center", title=None),
                                                                                                      )
                                                                                                      
                                                                                                      # 2. Create stripplot
                                                                                                      stripplot = alt.Chart(df).mark_circle(size=8, color="black").encode(
                                                                                                          x="Horsepower",
    y=alt.Y("jitter:Q", title=None),
                                                                                                      ).transform_calculate(
                                                                                                          jitter="(random() / 400) + 0.0052"  # Narrowing and centering the points
                                                                                                      )
                                                                                                      
                                                                                                      # 3. Combine both
                                                                                                      violin + stripplot
                                                                                                      
                                                                                                      import altair as alt
                                                                                                      import numpy as np
                                                                                                      import pandas as pd
                                                                                                      from scipy import stats
                                                                                                      from vega_datasets import data
                                                                                                      
                                                                                                      
                                                                                                      # NAs are not supported in SciPy's density calculation
                                                                                                      df = data.cars().dropna()
                                                                                                      y = 'Horsepower'
                                                                                                      
                                                                                                      # Compute the density function of the data
                                                                                                      dens = stats.gaussian_kde(df[y])
                                                                                                      # Compute the density value for each data point
                                                                                                      pdf = dens(df[y].sort_values())
                                                                                                      
# Randomly jitter points within 0 and the upper bound of the probability density function
                                                                                                      density_cloud = np.empty(pdf.shape[0])
                                                                                                      for i in range(pdf.shape[0]):
                                                                                                          density_cloud[i] = np.random.uniform(0, pdf[i])
                                                                                                      # To create a symmetric density/violin, we make every second point negative
                                                                                                      # Distributing every other point like this is also more likely to preserve the shape of the violin
                                                                                                      violin_cloud = density_cloud.copy()
                                                                                                      violin_cloud[::2] = violin_cloud[::2] * -1
                                                                                                      
                                                                                                      # Append the density cloud to the original data in the correctly sorted order
                                                                                                      df_with_density = pd.concat([
                                                                                                          df,
                                                                                                          pd.DataFrame({
                                                                                                              'density_cloud': density_cloud,
                                                                                                              'violin_cloud': violin_cloud
                                                                                                              },
                                                                                                              index=df['Horsepower'].sort_values().index)],
                                                                                                          axis=1
                                                                                                      )
                                                                                                      
                                                                                                      # Visualize using the new Offset channel
                                                                                                      alt.Chart(df_with_density).mark_circle().encode(
                                                                                                          x='Horsepower',
                                                                                                          y='violin_cloud'
                                                                                                      )
                                                                                                      
scipy.stats.norm for array of values with different accuracy in different method
Python | Lines of Code: 9 | License: Strong Copyleft (CC BY-SA 4.0)
                                                                                                      np.random.seed(1)
                                                                                                      x = np.random.rand(30, 2).astype(np.float128)
                                                                                                      np.random.seed(2)
                                                                                                      x_test = np.random.rand(5,2).astype(np.float128)
                                                                                                      
                                                                                                      print(gx[:,0] - gx0)
                                                                                                      
                                                                                                      [0. 0. 0. 0. 0.]
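The snippet above is a fragment: gx and gx0 are defined elsewhere and are not shown in this excerpt. A self-contained sketch of the underlying idea, comparing scipy.stats.norm.pdf against a hand-written Gaussian density at extended precision (the distribution parameters here are illustrative, and scipy may compute internally in double precision, so differences show up only in the low-order digits):

import numpy as np
from scipy import stats

np.random.seed(1)
x = np.random.rand(30).astype(np.float128)  # float128 requires a platform that supports it

mu, sigma = np.float128(0.5), np.float128(0.2)

# scipy's implementation
pdf_scipy = stats.norm.pdf(x, loc=mu, scale=sigma)

# hand-written Gaussian density, evaluated in extended precision
pdf_manual = np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

print(np.max(np.abs(pdf_scipy - pdf_manual)))  # small, but not exactly zero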
                                                                                                      
SciPy: interpolate scattered data on 3D grid
Python | Lines of Code: 23 | License: Strong Copyleft (CC BY-SA 4.0)
chunk_val = 2000000  # arbitrary; choose it based on the system's RAM size
                                                                                                      chunk = xi.shape[0] // chunk_val
                                                                                                      chunk_res = xi.shape[0] % chunk_val
                                                                                                      
                                                                                                      # by array
                                                                                                      di = np.array([])
                                                                                                      start = 0
                                                                                                      for i in range(chunk + 1):
                                                                                                          di = np.append(di, rbfi(xi[start:(i+1) * chunk_val], yi[start:(i+1) * chunk_val], zi[start:(i+1) * chunk_val]))
                                                                                                          start += chunk_val
                                                                                                          if i == chunk:
                                                                                                              di = np.append(di, rbfi(xi[start:xi.shape[0]], yi[start:xi.shape[0]], zi[start:xi.shape[0]]))
                                                                                                      
                                                                                                      # by list
                                                                                                      di = []
                                                                                                      start = 0
                                                                                                      for i in range(chunk + 1):
                                                                                                          di.append(rbfi(xi[start:(i+1) * chunk_val], yi[start:(i+1) * chunk_val], zi[start:(i+1) * chunk_val]))
                                                                                                          start += chunk_val
                                                                                                          if i == chunk:
                                                                                                              di.append(rbfi(xi[start:xi.shape[0]], yi[start:xi.shape[0]], zi[start:xi.shape[0]]))
                                                                                                      di = [item for sublist in di for item in sublist]
                                                                                                      
Approximating the conditional expectation E(X|Y)
Python | Lines of Code: 17 | License: Strong Copyleft (CC BY-SA 4.0)
P(X) = probability of X being True = (# of True elements in X) / (# of elements in X)

P(Y) = probability of Y being True = (# of True elements in Y) / (# of elements in Y)

P(X and Y) = probability of both X and Y being True = P(X) * P(Y)   (assuming X and Y are independent)

P(X given Y) = P(X | Y) = probability of X being True given that Y is True = P(X and Y) / P(Y)
                                                                                                      
import numpy as np

X = np.array(X, dtype=int)
Y = np.array(Y, dtype=int)

p_X = len(np.where(X == 1)[0]) / len(X)
p_Y = len(np.where(Y == 1)[0]) / len(Y)

p_X_and_Y = p_X * p_Y
p_X_given_Y = p_X_and_Y / p_Y
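Under the independence assumption above, P(X given Y) simply reduces to P(X). For boolean data, E(X | Y = True) can also be estimated directly from the samples as the mean of X over the positions where Y is true; a minimal sketch, not from the original answer and using randomly generated arrays purely for illustration:

import numpy as np

rng = np.random.default_rng(0)
X = rng.random(1000) < 0.3   # boolean array
Y = rng.random(1000) < 0.5   # boolean array (independent of X here)

# Empirical estimate of E(X | Y = True): average X over the samples where Y holds
e_x_given_y = X[Y].mean()
print(e_x_given_y)           # close to 0.3 when X and Y are independent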
                                                                                                      
                                                                                                      Community Discussions

                                                                                                      Trending Discussions on scipy

- Padding scipy affine_transform output to show non-overlapping regions of transformed images
- Cannot import name '_centered' from 'scipy.signal.signaltools'
- Installing scipy and scikit-learn on apple m1
- Why should I use normalised units in numerical integration?
- How could I speed up my written python code: spheres contact detection (collision) using spatial searching
- Colab: (0) UNIMPLEMENTED: DNN library is not found
- Cannot find conda info. Please verify your conda installation on EMR
- Edge weight in networkx
- ERROR: Could not build wheels for pycairo, which is required to install pyproject.toml-based projects
- Is it possible to use a collection of hyperspectral 1x1 pixels in a CNN model purposed for more conventional datasets (CIFAR-10/MNIST)?

                                                                                                      QUESTION

                                                                                                      Padding scipy affine_transform output to show non-overlapping regions of transformed images
                                                                                                      Asked 2022-Mar-28 at 11:54

                                                                                                      I have source (src) image(s) I wish to align to a destination (dst) image using an Affine Transformation whilst retaining the full extent of both images during alignment (even the non-overlapping areas).

I am already able to calculate the Affine Transformation rotation and offset matrix, which I feed to scipy.ndimage.affine_transform to recover the dst-aligned src image.

The problem is that, when the images do not fully overlap, the resultant image is cropped to only the common footprint of the two images. What I need is the full extent of both images, placed on the same pixel coordinate system. This question is almost a duplicate of this one - and the excellent answer and repository there provide this functionality for OpenCV transformations. I unfortunately need this for scipy's implementation.

Much too late, after repeatedly hitting a brick wall trying to translate the above question's answer to scipy, I came across this issue and subsequently followed it to this question. The latter question did give some insight into the wonderful world of scipy's affine transformation, but I have so far been unable to crack my particular needs.

The transformations from src to dst can have translations and rotations. I can get translation-only transforms working (an example is shown below) and I can get rotation-only transforms working (largely by hacking around the code below and taking inspiration from the use of the reshape argument in scipy.ndimage.interpolation.rotate). However, I am getting thoroughly lost combining the two. I have tried to calculate what should be the correct offset (see this question's answers again), but I can't get it working in all scenarios.

A translation-only working example of the padded affine transformation, which largely follows this repo, as explained in this answer:

                                                                                                      from scipy.ndimage import rotate, affine_transform
                                                                                                      import numpy as np
                                                                                                      import matplotlib.pyplot as plt
                                                                                                      
                                                                                                      nblob = 50
                                                                                                      shape = (200, 100)
                                                                                                      buffered_shape = (300, 200)  # buffer for rotation and translation
                                                                                                      
                                                                                                      
                                                                                                      def affine_test(angle=0, translate=(0, 0)):
                                                                                                          np.random.seed(42)
    # Maximum translation allowed is half the difference between shape and buffered_shape
                                                                                                      
                                                                                                          # Generate a buffered_shape-sized base image with random blobs
                                                                                                          base = np.zeros(buffered_shape, dtype=np.float32)
                                                                                                          random_locs = np.random.choice(np.arange(2, buffered_shape[0] - 2), nblob * 2, replace=False)
                                                                                                          i = random_locs[:nblob]
                                                                                                          j = random_locs[nblob:]
                                                                                                          for k, (_i, _j) in enumerate(zip(i, j)):
                                                                                                              # Use different values, just to make it easier to distinguish blobs
                                                                                                              base[_i - 2 : _i + 2, _j - 2 : _j + 2] = k + 10
                                                                                                      
                                                                                                          # Impose a rotation and translation on source
                                                                                                          src = rotate(base, angle, reshape=False, order=1, mode="constant")
                                                                                                          bsc = (np.array(buffered_shape) / 2).astype(int)
                                                                                                          sc = (np.array(shape) / 2).astype(int)
                                                                                                          src = src[
                                                                                                              bsc[0] - sc[0] + translate[0] : bsc[0] + sc[0] + translate[0],
                                                                                                              bsc[1] - sc[1] + translate[1] : bsc[1] + sc[1] + translate[1],
                                                                                                          ]
                                                                                                          # Cut-out destination from the centre of the base image
                                                                                                          dst = base[bsc[0] - sc[0] : bsc[0] + sc[0], bsc[1] - sc[1] : bsc[1] + sc[1]]
                                                                                                      
                                                                                                          src_y, src_x = src.shape
                                                                                                      
                                                                                                          def get_matrix_offset(centre, angle, scale):
                                                                                                              """Follows OpenCV.getRotationMatrix2D"""
                                                                                                              angle = angle * np.pi / 180
                                                                                                              alpha = scale * np.cos(angle)
                                                                                                              beta = scale * np.sin(angle)
                                                                                                              return (
                                                                                                                  np.array([[alpha, beta], [-beta, alpha]]),
                                                                                                                  np.array(
                                                                                                                      [
                                                                                                                          (1 - alpha) * centre[0] - beta * centre[1],
                                                                                                                          beta * centre[0] + (1 - alpha) * centre[1],
                                                                                                                      ]
                                                                                                                  ),
                                                                                                              )
                                                                                                          # Obtain the rotation matrix and offset that describes the transformation
                                                                                                          # between src and dst
                                                                                                          matrix, offset = get_matrix_offset(np.array([src_y / 2, src_x / 2]), angle, 1)
                                                                                                          offset = offset - translate
                                                                                                      
                                                                                                          # Determine the outer bounds of the new image
                                                                                                          lin_pts = np.array([[0, src_x, src_x, 0], [0, 0, src_y, src_y]])
                                                                                                          transf_lin_pts = np.dot(matrix.T, lin_pts) - offset[::-1].reshape(2, 1)
                                                                                                      
                                                                                                          # Find min and max bounds of the transformed image
                                                                                                          min_x = np.floor(np.min(transf_lin_pts[0])).astype(int)
                                                                                                          min_y = np.floor(np.min(transf_lin_pts[1])).astype(int)
                                                                                                          max_x = np.ceil(np.max(transf_lin_pts[0])).astype(int)
                                                                                                          max_y = np.ceil(np.max(transf_lin_pts[1])).astype(int)
                                                                                                      
                                                                                                          # Add translation to the transformation matrix to shift to positive values
                                                                                                          anchor_x, anchor_y = 0, 0
                                                                                                          if min_x < 0:
                                                                                                              anchor_x = -min_x
                                                                                                          if min_y < 0:
                                                                                                              anchor_y = -min_y
                                                                                                          shifted_offset = offset - np.dot(matrix, [anchor_y, anchor_x])
                                                                                                      
                                                                                                          # Create padded destination image
                                                                                                          dst_h, dst_w = dst.shape[:2]
                                                                                                          pad_widths = [anchor_y, max(max_y, dst_h) - dst_h, anchor_x, max(max_x, dst_w) - dst_w]
                                                                                                          dst_padded = np.pad(
                                                                                                              dst,
                                                                                                              ((pad_widths[0], pad_widths[1]), (pad_widths[2], pad_widths[3])),
                                                                                                              "constant",
                                                                                                              constant_values=-1,
                                                                                                          )
                                                                                                          dst_pad_h, dst_pad_w = dst_padded.shape
                                                                                                      
                                                                                                          # Create the aligned and padded source image
                                                                                                          source_aligned = affine_transform(
                                                                                                              src,
                                                                                                              matrix.T,
                                                                                                              offset=shifted_offset,
                                                                                                              output_shape=(dst_pad_h, dst_pad_w),
                                                                                                              order=3,
                                                                                                              mode="constant",
                                                                                                              cval=-1,
                                                                                                          )
                                                                                                      
                                                                                                          # Plot the images
                                                                                                          fig, axes = plt.subplots(1, 4, figsize=(10, 5), sharex=True, sharey=True)
                                                                                                          axes[0].imshow(src, cmap="viridis", vmin=-1, vmax=nblob)
                                                                                                          axes[0].set_title("Source")
                                                                                                          axes[1].imshow(dst, cmap="viridis", vmin=-1, vmax=nblob)
                                                                                                          axes[1].set_title("Dest")
                                                                                                          axes[2].imshow(source_aligned, cmap="viridis", vmin=-1, vmax=nblob)
                                                                                                          axes[2].set_title("Source aligned to Dest padded")
                                                                                                          axes[3].imshow(dst_padded, cmap="viridis", vmin=-1, vmax=nblob)
                                                                                                          axes[3].set_title("Dest padded")
                                                                                                          plt.show()
                                                                                                      

                                                                                                      e.g.:

                                                                                                      affine_test(0, (-20, 40))
                                                                                                      

gives the result shown in the figures (omitted here), with a zoom-in showing the aligned image within the padded images.

                                                                                                      I require the full extent of the src and dst images aligned on the same pixel coordinates, with both rotations and translations.

                                                                                                      Any help is greatly appreciated!

                                                                                                      ANSWER

                                                                                                      Answered 2022-Mar-22 at 16:44

If you have two images that are similar (or identical) and you want to align them, you can do it using the rotate and shift functions:

                                                                                                      from scipy.ndimage import rotate, shift
                                                                                                      

First you need to find the angle difference between the two images, angle_to_rotate; with that, you apply a rotation to src:

                                                                                                      angle_to_rotate = 25
rotated_src = rotate(src, angle_to_rotate, reshape=True, order=1, mode="constant")
                                                                                                      

With reshape=True you avoid losing information from your original src matrix, and it pads the result so the image can be translated around the (0, 0) index. You can calculate this translation as (x*cos(angle), y*sin(angle)), where x and y are the dimensions of the image, but it probably won't matter.
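For illustration, here is a minimal sketch (using an assumed toy array, not the asker's data) showing how reshape=True grows the output canvas while reshape=False clips the rotated corners:

import numpy as np
from scipy.ndimage import rotate

a = np.ones((40, 20))
print(rotate(a, 25, reshape=True, order=1).shape)   # larger than (40, 20): the canvas grows to fit
print(rotate(a, 25, reshape=False, order=1).shape)  # stays (40, 20): rotated corners are clipped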

Now you need to translate the rotated image into place; for that you can use the shift function:

rot_translated_src = shift(rotated_src, [distance_y, distance_x])  # shift takes per-axis offsets: rows (y) first, then columns (x)
                                                                                                      

In this case there is no reshape (otherwise you wouldn't get any real translation), so if the image was not previously padded, some information will be lost.

                                                                                                      But you can do some padding with

                                                                                                      np.pad(src, number, mode='constant')
                                                                                                      

To calculate distance_x and distance_y you will need to find a point that serves as a reference between rotated_src and the destination, then just measure the distance along the x and y axes.

                                                                                                      Summary

1. Make some padding in src and dst
                                                                                                      2. Find the angular distance between them.
                                                                                                      3. Rotate src with scipy.ndimage.rotate using reshape=True
                                                                                                      4. Find the horizontal and vertical distance distance_x, distance_y between the rotated image and dst
                                                                                                      5. Translate your 'rotated_src' with scipy.ndimage.shift

                                                                                                      Code

                                                                                                      from scipy.ndimage import rotate, shift
                                                                                                      import matplotlib.pyplot as plt
                                                                                                      import numpy as np
                                                                                                      

                                                                                                      First we make the destination image:

                                                                                                      # make and plot dest
                                                                                                      dst = np.ones([40,20])
                                                                                                      dst = np.pad(dst,10)
                                                                                                      dst[17,[14,24]]=4
                                                                                                      dst[27,14:25]=4
                                                                                                      dst[26,[14,25]]=4
                                                                                                      rotated_dst = rotate(dst, 20, order=1)
                                                                                                      
plt.imshow(dst)          # plot the destination
plt.show()
plt.imshow(rotated_dst)  # plot the rotated destination
plt.show()
                                                                                                      

                                                                                                      We make the Source image:

                                                                                                      # make_src image and plot it
                                                                                                      src = np.zeros([40,20])
                                                                                                      src = np.pad(src,10)
                                                                                                      src[0:20,0:20]=1
                                                                                                      src[7,[4,14]]=4
                                                                                                      src[17,4:15]=4
                                                                                                      src[16,[4,15]]=4
                                                                                                      plt.imshow(src)
                                                                                                      plt.show()
                                                                                                      

                                                                                                      Then we align the src to the destination:

rotated_src = rotate(src, 20, order=1)  # rotate by the angle found (20 degrees); reshape=True is the default
                                                                                                      plt.imshow(rotated_src)
                                                                                                      plt.show()
distance_y = 8   # find these distances from rotated_src and dst
distance_x = 12  # use any visual reference, or even the corners
translated_src = shift(rotated_src, [distance_y, distance_x])
                                                                                                      plt.imshow(translated_src)
                                                                                                      plt.show()
                                                                                                      

P.S.: If you have trouble finding the angle and the distances programmatically, please leave a comment with a bit more insight into what could serve as a reference (for example, the frame of the image or some image features/data).
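As one hedged illustration of a programmatic reference point (my addition, not part of the original answer), the bright marker pixels in both example images could be located with scipy.ndimage.center_of_mass and their offset used as the shift:

import numpy as np
from scipy.ndimage import center_of_mass

def estimate_shift(rotated_src, dst):
    # Assumes the marker pixels are the brightest values in both images,
    # so their centre of mass can serve as a common reference point.
    src_ref = np.array(center_of_mass(rotated_src == rotated_src.max()))
    dst_ref = np.array(center_of_mass(dst == dst.max()))
    distance_y, distance_x = dst_ref - src_ref
    return distance_y, distance_x

# e.g. distance_y, distance_x = estimate_shift(rotated_src, dst)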

                                                                                                      Source https://stackoverflow.com/questions/71516584

                                                                                                      QUESTION

                                                                                                      Cannot import name '_centered' from 'scipy.signal.signaltools'
                                                                                                      Asked 2022-Mar-22 at 12:29

Unable to import functions from the scipy module.

It gives this error:

                                                                                                      from scipy.signal.signaltools import _centered
                                                                                                      Cannot import name '_centered' from 'scipy.signal.signaltools'
                                                                                                      
                                                                                                      scipy.__version__
                                                                                                      1.8.0
                                                                                                      

                                                                                                      ANSWER

                                                                                                      Answered 2022-Feb-17 at 08:30

I encountered the same problem while using statsmodels~=0.12.x. Upgrading the statsmodels package to version 0.13.2 resolves this import issue.

                                                                                                      UPDATE with more notes:

• before:
  • installation of a pinned version statsmodels==0.12.2, which depends on scipy
  • scipy==1.8.0 was newly released on 2022-02-05
    • when installing it, I got this problem:
    from statsmodels.tsa.seasonal import seasonal_decompose
  File "/usr/local/lib/python3.8/site-packages/statsmodels/tsa/seasonal.py", line 12, in <module>
    from statsmodels.tsa.filters.filtertools import convolution_filter
  File "/usr/local/lib/python3.8/site-packages/statsmodels/tsa/filters/filtertools.py", line 18, in <module>
    from scipy.signal.signaltools import _centered as trim_centered
ImportError: cannot import name '_centered' from 'scipy.signal.signaltools' (/usr/local/lib/python3.8/site-packages/scipy/signal/signaltools.py)
                                                                                                      
• after:

  • when bumping statsmodels up to the latest available version, 0.13.2 (released 2022-02-08), it works
• If you are not using statsmodels but another package that depends on scipy, check whether a newer version of it is available (released after scipy 1.8.0); if you cannot upgrade it, see the fallback sketch below.
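If upgrading the dependent package is not an option, a hedged fallback sketch (my assumption, relying on the private-module rename in SciPy 1.8) is to try both import paths:

try:
    from scipy.signal.signaltools import _centered   # SciPy < 1.8
except ImportError:
    # SciPy >= 1.8 moved the implementation to a private module;
    # this is private API and may change again without notice.
    from scipy.signal._signaltools import _centered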

                                                                                                      Source https://stackoverflow.com/questions/71106940

                                                                                                      QUESTION

                                                                                                      Installing scipy and scikit-learn on apple m1
                                                                                                      Asked 2022-Mar-22 at 06:21

The installation of the following packages on the M1 chip works fine for me: numpy 1.21.1, pandas 1.3.0, torch 1.9.0 and a few others. They also seem to work properly when I test them. However, when I try to install scipy or scikit-learn via pip, this error appears:

                                                                                                      ERROR: Failed building wheel for numpy

                                                                                                      Failed to build numpy

                                                                                                      ERROR: Could not build wheels for numpy which use PEP 517 and cannot be installed directly

Why should numpy be built again when I already have the latest version from pip installed?

                                                                                                      Every previous installation was done using python3.9 -m pip install ... on Mac OS 11.3.1 with the apple m1 chip.

Maybe somebody knows how to deal with this error, or whether it's just a matter of time.

                                                                                                      ANSWER

                                                                                                      Answered 2021-Aug-02 at 14:33

Please see this note from scikit-learn about

                                                                                                      Installing on Apple Silicon M1 hardware

The recently introduced macos/arm64 platform (sometimes also known as macos/aarch64) requires the open source community to upgrade the build configuration and automation to properly support it.

                                                                                                      At the time of writing (January 2021), the only way to get a working installation of scikit-learn on this hardware is to install scikit-learn and its dependencies from the conda-forge distribution, for instance using the miniforge installers:

                                                                                                      https://github.com/conda-forge/miniforge

                                                                                                      The following issue tracks progress on making it possible to install scikit-learn from PyPI with pip:

                                                                                                      https://github.com/scikit-learn/scikit-learn/issues/19137

                                                                                                      Source https://stackoverflow.com/questions/68620927

                                                                                                      QUESTION

                                                                                                      Why should I use normalised units in numerical integration?
                                                                                                      Asked 2022-Mar-19 at 10:40

                                                                                                      I was simulating the solar system (Sun, Earth and Moon). When I first started working on the project, I used the base units: meters for distance, seconds for time, and metres per second for velocity. Because I was dealing with the solar system, the numbers were pretty big, for example the distance between the Earth and Sun is 150·10⁹ m.

                                                                                                      When I numerically integrated the system with scipy.solve_ivp, the results were completely wrong. Here is an example of Earth and Moon trajectories.

                                                                                                      But then I got a suggestion from a friend that I should use standardised units: astronomical unit (AU) for distance and years for time. And the simulation started working flawlessly!

                                                                                                      My question is: Why is this a generally valid advice for problems such as mine? (Mind that this is not about my specific problem which was already solved, but rather why the solution worked.)

                                                                                                      ANSWER

                                                                                                      Answered 2021-Jul-25 at 07:42

Most, if not all, integration modules work best out of the box if:

                                                                                                      • your dynamical variables have the same order of magnitude;
                                                                                                      • that order of magnitude is 1;
                                                                                                      • the smallest time scale of your dynamics also has the order of magnitude 1.

                                                                                                      This typically fails for astronomical simulations where the orders of magnitude vary and values as well as time scales are often large in typical units.

The reason for the above behaviour of integrators is that they use step-size adaptation, i.e., the integration step is adjusted to keep the estimated error at a defined level. The step-size adaptation in turn is governed by a number of parameters such as absolute tolerance, relative tolerance, and minimum time step. You can usually tweak these parameters, but if you don't, there need to be some default values, and those defaults are chosen with the above setup in mind.
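To make that concrete, here is a minimal sketch (an assumed two-body Sun-Earth setup, not the asker's code) in which working in AU and years keeps positions, velocities and the time scale near order 1, so the tolerance machinery behaves well:

import numpy as np
from scipy.integrate import solve_ivp

GM_SUN = 4 * np.pi**2          # gravitational parameter of the Sun in AU^3 / yr^2

def two_body(t, state):
    # state = [x, y, vx, vy] in AU and AU/yr
    x, y, vx, vy = state
    r3 = (x**2 + y**2) ** 1.5
    return [vx, vy, -GM_SUN * x / r3, -GM_SUN * y / r3]

# Earth starts at 1 AU with circular orbital speed 2*pi AU/yr; integrate one year
sol = solve_ivp(two_body, (0.0, 1.0), [1.0, 0.0, 0.0, 2 * np.pi],
                rtol=1e-9, atol=1e-12)
print(sol.y[:2, -1])           # ends close to the starting point (1, 0)

The same integrator in metres and seconds would need its tolerances (especially the absolute tolerance) re-tuned by hand to reach comparable accuracy.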

                                                                                                      Digression

                                                                                                      You might ask yourself: Can these parameters not be chosen more dynamically? As a developer and maintainer of an integration module, I would roughly expect that introducing such automatisms has the following consequences:

• About twenty in a thousand users will not run into problems like yours.
• About fifty in a thousand users (including the above) miss an opportunity to learn rudimentary knowledge about how integrators work and about reading documentation.
• About one in a thousand users will run into a horrible problem with the automatisms that is much more difficult to solve than the above.
                                                                                                      • I need to introduce new parameters governing the automatisms that are even harder to grasp for the average user.
                                                                                                      • I spend a lot of time in devising and implementing the automatisms.

                                                                                                      Source https://stackoverflow.com/questions/68500704

                                                                                                      QUESTION

                                                                                                      How could I speed up my written python code: spheres contact detection (collision) using spatial searching
                                                                                                      Asked 2022-Mar-13 at 15:43

I am working on a spatial search case for spheres in which I want to find connected spheres. To that end, I search around each sphere for spheres whose centers are within one maximum sphere diameter of the searching sphere's center. At first I tried the related scipy methods, but the scipy approach takes longer than the equivalent numpy method. With scipy, I first determined the number of K-nearest spheres and then found them with cKDTree.query, which adds to the time consumption; it is slower than the numpy method even when the first step is skipped in favour of a constant value (though skipping that step is not really acceptable here). This is contrary to my expectations about scipy's spatial-search speed. So I tried replacing some numpy lines with list loops to speed things up with numba prange. Numba runs the code a little faster, but I believe this code can be optimized for better performance, perhaps by vectorization, by using other numpy modules, or by using numba in another way. I iterate over all spheres to prevent probable memory leaks and the like, since the number of spheres is high.

                                                                                                      import numpy as np
                                                                                                      import numba as nb
                                                                                                      from scipy.spatial import cKDTree, distance
                                                                                                      
                                                                                                      # ---------------------------- input data ----------------------------
                                                                                                      """ For testing by prepared files:
                                                                                                      radii = np.load('a.npy')     # shape: (n-spheres, )     must be loaded by np.load('a.npy') or np.loadtxt('radii_large.csv')
                                                                                                      poss = np.load('b.npy')      # shape: (n-spheres, 3)    must be loaded by np.load('b.npy') or np.loadtxt('pos_large.csv', delimiter=',')
                                                                                                      """
                                                                                                      
                                                                                                      rnd = np.random.RandomState(70)
                                                                                                      data_volume = 200000
                                                                                                      
                                                                                                      radii = rnd.uniform(0.0005, 0.122, data_volume)
                                                                                                      dia_max = 2 * radii.max()
                                                                                                      
                                                                                                      x = rnd.uniform(-1.02, 1.02, (data_volume, 1))
                                                                                                      y = rnd.uniform(-3.52, 3.52, (data_volume, 1))
                                                                                                      z = rnd.uniform(-1.02, -0.575, (data_volume, 1))
                                                                                                      poss = np.hstack((x, y, z))
                                                                                                      # --------------------------------------------------------------------
                                                                                                      
                                                                                                      # @nb.jit('float64[:,::1](float64[:,::1], float64[::1])', forceobj=True, parallel=True)
                                                                                                      def ends_gap(poss, dia_max):
                                                                                                          particle_corsp_overlaps = np.array([], dtype=np.float64)
                                                                                                          ends_ind = np.empty([1, 2], dtype=np.int64)
                                                                                                          """ using list looping """
                                                                                                          # particle_corsp_overlaps = []
                                                                                                          # ends_ind = []
                                                                                                      
                                                                                                          # for particle_idx in nb.prange(len(poss)):  # by list looping
                                                                                                          for particle_idx in range(len(poss)):
                                                                                                              unshared_idx = np.delete(np.arange(len(poss)), particle_idx)                                                    # <--- relatively high time consumer
                                                                                                              poss_without = poss[unshared_idx]
                                                                                                      
                                                                                                              """ # SCIPY method ---------------------------------------------------------------------------------------------
                                                                                                              nears_i_ind = cKDTree(poss_without).query_ball_point(poss[particle_idx], r=dia_max)         # <--- high time consumer
                                                                                                              if len(nears_i_ind) > 0:
                                                                                                                  dist_i, dist_i_ind = cKDTree(poss_without[nears_i_ind]).query(poss[particle_idx], k=len(nears_i_ind))       # <--- high time consumer
                                                                                                                  if not isinstance(dist_i, float):
                                                                                                                      dist_i[dist_i_ind] = dist_i.copy()
                                                                                                              """  # NUMPY method --------------------------------------------------------------------------------------------
                                                                                                              lx_limit_idx = poss_without[:, 0] <= poss[particle_idx][0] + dia_max
                                                                                                              ux_limit_idx = poss_without[:, 0] >= poss[particle_idx][0] - dia_max
                                                                                                              ly_limit_idx = poss_without[:, 1] <= poss[particle_idx][1] + dia_max
                                                                                                              uy_limit_idx = poss_without[:, 1] >= poss[particle_idx][1] - dia_max
                                                                                                              lz_limit_idx = poss_without[:, 2] <= poss[particle_idx][2] + dia_max
                                                                                                              uz_limit_idx = poss_without[:, 2] >= poss[particle_idx][2] - dia_max
                                                                                                      
                                                                                                              nears_i_ind = np.where(lx_limit_idx & ux_limit_idx & ly_limit_idx & uy_limit_idx & lz_limit_idx & uz_limit_idx)[0]
                                                                                                              if len(nears_i_ind) > 0:
                                                                                                                  dist_i = distance.cdist(poss_without[nears_i_ind], poss[particle_idx][None, :]).squeeze()                   # <--- relatively high time consumer
                                                                                                              # """  # -------------------------------------------------------------------------------------------------------
                                                                                                                  contact_check = dist_i - (radii[unshared_idx][nears_i_ind] + radii[particle_idx])
                                                                                                                  connected = contact_check[contact_check <= 0]
                                                                                                      
                                                                                                                  particle_corsp_overlaps = np.concatenate((particle_corsp_overlaps, connected))
                                                                                                                  """ using list looping """
                                                                                                                  # if len(connected) > 0:
                                                                                                                  #    for value_ in connected:
                                                                                                                  #        particle_corsp_overlaps.append(value_)
                                                                                                      
                                                                                                                  contacts_ind = np.where([contact_check <= 0])[1]
                                                                                                                  contacts_sec_ind = np.array(nears_i_ind)[contacts_ind]
                                                                                                                  sphere_olps_ind = np.where((poss[:, None] == poss_without[contacts_sec_ind][None, :]).all(axis=2))[0]       # <--- high time consumer
                                                                                                      
                                                                                                                  ends_ind_mod_temp = np.array([np.repeat(particle_idx, len(sphere_olps_ind)), sphere_olps_ind], dtype=np.int64).T
                                                                                                                  if particle_idx > 0:
                                                                                                                      ends_ind = np.concatenate((ends_ind, ends_ind_mod_temp))
                                                                                                                  else:
                                                                                                                      ends_ind[0, 0], ends_ind[0, 1] = ends_ind_mod_temp[0, 0], ends_ind_mod_temp[0, 1]
                                                                                                                  """ using list looping """
                                                                                                                  # for contacted_idx in sphere_olps_ind:
                                                                                                                  #    ends_ind.append([particle_idx, contacted_idx])
                                                                                                      
                                                                                                          # ends_ind_org = np.array(ends_ind)  # using lists
                                                                                                          ends_ind_org = ends_ind
                                                                                                          ends_ind, ends_ind_idx = np.unique(np.sort(ends_ind_org), axis=0, return_index=True)                                # <--- relatively high time consumer
                                                                                                          gap = np.array(particle_corsp_overlaps)[ends_ind_idx]
                                                                                                          return gap, ends_ind, ends_ind_idx, ends_ind_org
                                                                                                      

In one of my tests on 23,000 spheres, the scipy, numpy, and numba-aided methods finished the loop in about 400, 200, and 180 seconds respectively on Colab TPU; for 500,000 spheres it takes 3.5 hours. These execution times are not at all satisfying for my project, where the number of spheres may be up to 1,000,000 in a medium data volume. I will call this code many times in my main code and am looking for ways to run it in milliseconds (as fast as possible). Is that possible? I would appreciate it if anyone could speed up the code as needed.

                                                                                                      Notes:

                                                                                                      • This code must be executable with python 3.7+, on CPU and GPU.
• This code must be applicable to a data size of at least 300,000 spheres.
• Answers that use numpy, scipy, or other equivalent modules in place of my hand-written code, and that make it significantly faster, will be upvoted.

I would appreciate any recommendations or explanations about:

                                                                                                      1. Which method could be faster in this subject?
2. Why is scipy not faster than the other methods in this case, and where could it help for this kind of problem?
3. Choosing between iterative methods and matrix-form methods is confusing for me. Iterative methods use less memory and can be tuned with numba and the like, but I think they are not competitive with matrix methods (which are limited by memory) such as numpy for huge numbers of spheres. For this case, perhaps I could drop the iteration in favour of pure numpy, but I strongly suspect it cannot be handled, due to the huge matrix operations and memory limits involved.

                                                                                                      Prepared sample test data:

                                                                                                      Poss data: 23000, 500000
                                                                                                      Radii data: 23000, 500000
Line-by-line speed test logs: scipy-method and numpy-method time consumption for the two test cases.

                                                                                                      ANSWER

                                                                                                      Answered 2022-Feb-14 at 10:23

                                                                                                      Have you tried FLANN?

                                                                                                      This code doesn't solve your problem completely. It simply finds the nearest 50 neighbors to each point in your 500000 point dataset:

import numpy as np
from pyflann import FLANN

# Load the 500,000-point sample positions and build a FLANN index over them
p = np.loadtxt("pos_large.csv", delimiter=",")
flann = FLANN()
flann.build_index(pts=p)
# Query the index with the same points: the 50 nearest neighbours of each point
idx, dist = flann.nn_index(qpts=p, num_neighbors=50)
                                                                                                      

The last line takes less than a second on my laptop, without any tuning or parallelization.
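For a SciPy-only comparison, here is a hedged sketch (my addition, not part of this answer) of the same candidate search using cKDTree.query_pairs, which returns every pair of centres closer than a given radius in one call:

import numpy as np
from scipy.spatial import cKDTree

def contact_pairs(poss, radii):
    # One tree over all centres; every pair closer than the largest possible
    # contact distance is a candidate, then filter with the actual radii sums.
    tree = cKDTree(poss)
    pairs = np.array(list(tree.query_pairs(r=2 * radii.max())))
    d = np.linalg.norm(poss[pairs[:, 0]] - poss[pairs[:, 1]], axis=1)
    gap = d - (radii[pairs[:, 0]] + radii[pairs[:, 1]])
    keep = gap <= 0
    return pairs[keep], gap[keep]

# e.g. ends_ind, overlaps = contact_pairs(poss, radii)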

                                                                                                      Source https://stackoverflow.com/questions/71104627

                                                                                                      QUESTION

                                                                                                      Colab: (0) UNIMPLEMENTED: DNN library is not found
                                                                                                      Asked 2022-Feb-08 at 19:27

I have a pretrained model for object detection (TensorFlow) inside Google Colab, and I run it two or three times per week on new images; everything was fine for the last year until this week. Now when I try to run the model I get this message:

                                                                                                      Graph execution error:
                                                                                                      
                                                                                                      2 root error(s) found.
                                                                                                        (0) UNIMPLEMENTED:  DNN library is not found.
                                                                                                           [[{{node functional_1/conv1_conv/Conv2D}}]]
                                                                                                           [[StatefulPartitionedCall/SecondStagePostprocessor/BatchMultiClassNonMaxSuppression/MultiClassNonMaxSuppression/Reshape_5/_126]]
                                                                                                        (1) UNIMPLEMENTED:  DNN library is not found.
                                                                                                           [[{{node functional_1/conv1_conv/Conv2D}}]]
                                                                                                      0 successful operations.
                                                                                                      0 derived errors ignored. [Op:__inference_restored_function_body_27380] ***
                                                                                                      

This never happened before.

Before I can run my model I have to install the TensorFlow Object Detection API with these commands:

                                                                                                      import os
                                                                                                      
                                                                                                      os.chdir('/project/models/research')
                                                                                                      
                                                                                                      !protoc object_detection/protos/*.proto --python_out=.
                                                                                                      !cp object_detection/packages/tf2/setup.py .
                                                                                                      !python -m pip install .
                                                                                                      

This is the output of the command:

                                                                                                      Processing /content/gdrive/MyDrive/models/research
                                                                                                        DEPRECATION: A future pip version will change local packages to be built in-place without first copying to a temporary directory. We recommend you use --use-feature=in-tree-build to test your packages with this new behavior before it becomes the default.
                                                                                                         pip 21.3 will remove support for this functionality. You can find discussion regarding this at https://github.com/pypa/pip/issues/7555.
                                                                                                      Collecting avro-python3
                                                                                                        Downloading avro-python3-1.10.2.tar.gz (38 kB)
                                                                                                      Collecting apache-beam
                                                                                                        Downloading apache_beam-2.35.0-cp37-cp37m-manylinux2010_x86_64.whl (9.9 MB)
                                                                                                           |████████████████████████████████| 9.9 MB 1.6 MB/s
                                                                                                      Requirement already satisfied: pillow in /usr/local/lib/python3.7/dist-packages (from object-detection==0.1) (7.1.2)
                                                                                                      Requirement already satisfied: lxml in /usr/local/lib/python3.7/dist-packages (from object-detection==0.1) (4.2.6)
                                                                                                      Requirement already satisfied: matplotlib in /usr/local/lib/python3.7/dist-packages (from object-detection==0.1) (3.2.2)
                                                                                                      Requirement already satisfied: Cython in /usr/local/lib/python3.7/dist-packages (from object-detection==0.1) (0.29.27)
                                                                                                      Requirement already satisfied: contextlib2 in /usr/local/lib/python3.7/dist-packages (from object-detection==0.1) (0.5.5)
                                                                                                      Collecting tf-slim
                                                                                                        Downloading tf_slim-1.1.0-py2.py3-none-any.whl (352 kB)
                                                                                                           |████████████████████████████████| 352 kB 50.5 MB/s
                                                                                                      Requirement already satisfied: six in /usr/local/lib/python3.7/dist-packages (from object-detection==0.1) (1.15.0)
                                                                                                      Requirement already satisfied: pycocotools in /usr/local/lib/python3.7/dist-packages (from object-detection==0.1) (2.0.4)
                                                                                                      Collecting lvis
                                                                                                        Downloading lvis-0.5.3-py3-none-any.whl (14 kB)
                                                                                                      Requirement already satisfied: scipy in /usr/local/lib/python3.7/dist-packages (from object-detection==0.1) (1.4.1)
                                                                                                      Requirement already satisfied: pandas in /usr/local/lib/python3.7/dist-packages (from object-detection==0.1) (1.3.5)
                                                                                                      Collecting tf-models-official>=2.5.1
                                                                                                        Downloading tf_models_official-2.8.0-py2.py3-none-any.whl (2.2 MB)
                                                                                                           |████████████████████████████████| 2.2 MB 38.3 MB/s
                                                                                                      Collecting tensorflow_io
                                                                                                        Downloading tensorflow_io-0.24.0-cp37-cp37m-manylinux_2_12_x86_64.manylinux2010_x86_64.whl (23.4 MB)
                                                                                                           |████████████████████████████████| 23.4 MB 1.7 MB/s
                                                                                                      Requirement already satisfied: keras in /usr/local/lib/python3.7/dist-packages (from object-detection==0.1) (2.7.0)
                                                                                                      Collecting opencv-python-headless
                                                                                                        Downloading opencv_python_headless-4.5.5.62-cp36-abi3-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (47.7 MB)
                                                                                                           |████████████████████████████████| 47.7 MB 74 kB/s
                                                                                                      Collecting sacrebleu
                                                                                                        Downloading sacrebleu-2.0.0-py3-none-any.whl (90 kB)
                                                                                                           |████████████████████████████████| 90 kB 10.4 MB/s
                                                                                                      Requirement already satisfied: kaggle>=1.3.9 in /usr/local/lib/python3.7/dist-packages (from tf-models-official>=2.5.1->object-detection==0.1) (1.5.12)
                                                                                                      Requirement already satisfied: psutil>=5.4.3 in /usr/local/lib/python3.7/dist-packages (from tf-models-official>=2.5.1->object-detection==0.1) (5.4.8)
                                                                                                      Requirement already satisfied: oauth2client in /usr/local/lib/python3.7/dist-packages (from tf-models-official>=2.5.1->object-detection==0.1) (4.1.3)
                                                                                                      Collecting tensorflow-addons
                                                                                                        Downloading tensorflow_addons-0.15.0-cp37-cp37m-manylinux_2_12_x86_64.manylinux2010_x86_64.whl (1.1 MB)
                                                                                                           |████████████████████████████████| 1.1 MB 37.8 MB/s
                                                                                                      Requirement already satisfied: gin-config in /usr/local/lib/python3.7/dist-packages (from tf-models-official>=2.5.1->object-detection==0.1) (0.5.0)
                                                                                                      Requirement already satisfied: tensorflow-datasets in /usr/local/lib/python3.7/dist-packages (from tf-models-official>=2.5.1->object-detection==0.1) (4.0.1)
                                                                                                      Collecting sentencepiece
                                                                                                        Downloading sentencepiece-0.1.96-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (1.2 MB)
                                                                                                           |████████████████████████████████| 1.2 MB 37.5 MB/s
                                                                                                      Collecting tensorflow-model-optimization>=0.4.1
                                                                                                        Downloading tensorflow_model_optimization-0.7.0-py2.py3-none-any.whl (213 kB)
                                                                                                           |████████████████████████████████| 213 kB 42.7 MB/s
                                                                                                      Collecting pyyaml<6.0,>=5.1
                                                                                                        Downloading PyYAML-5.4.1-cp37-cp37m-manylinux1_x86_64.whl (636 kB)
                                                                                                           |████████████████████████████████| 636 kB 53.3 MB/s
                                                                                                      Collecting tensorflow-text~=2.8.0
                                                                                                        Downloading tensorflow_text-2.8.1-cp37-cp37m-manylinux_2_12_x86_64.manylinux2010_x86_64.whl (4.9 MB)
                                                                                                           |████████████████████████████████| 4.9 MB 46.1 MB/s
                                                                                                      Requirement already satisfied: google-api-python-client>=1.6.7 in /usr/local/lib/python3.7/dist-packages (from tf-models-official>=2.5.1->object-detection==0.1) (1.12.10)
                                                                                                      Requirement already satisfied: numpy>=1.15.4 in /usr/local/lib/python3.7/dist-packages (from tf-models-official>=2.5.1->object-detection==0.1) (1.19.5)
                                                                                                      Requirement already satisfied: tensorflow-hub>=0.6.0 in /usr/local/lib/python3.7/dist-packages (from tf-models-official>=2.5.1->object-detection==0.1) (0.12.0)
                                                                                                      Collecting seqeval
                                                                                                        Downloading seqeval-1.2.2.tar.gz (43 kB)
                                                                                                           |████████████████████████████████| 43 kB 2.1 MB/s
                                                                                                      Collecting tensorflow~=2.8.0
                                                                                                        Downloading tensorflow-2.8.0-cp37-cp37m-manylinux2010_x86_64.whl (497.5 MB)
                                                                                                           |████████████████████████████████| 497.5 MB 28 kB/s
                                                                                                      Collecting py-cpuinfo>=3.3.0
                                                                                                        Downloading py-cpuinfo-8.0.0.tar.gz (99 kB)
                                                                                                           |████████████████████████████████| 99 kB 10.1 MB/s
                                                                                                      Requirement already satisfied: google-auth<3dev,>=1.16.0 in /usr/local/lib/python3.7/dist-packages (from google-api-python-client>=1.6.7->tf-models-official>=2.5.1->object-detection==0.1) (1.35.0)
                                                                                                      Requirement already satisfied: uritemplate<4dev,>=3.0.0 in /usr/local/lib/python3.7/dist-packages (from google-api-python-client>=1.6.7->tf-models-official>=2.5.1->object-detection==0.1) (3.0.1)
                                                                                                      Requirement already satisfied: httplib2<1dev,>=0.15.0 in /usr/local/lib/python3.7/dist-packages (from google-api-python-client>=1.6.7->tf-models-official>=2.5.1->object-detection==0.1) (0.17.4)
                                                                                                      Requirement already satisfied: google-auth-httplib2>=0.0.3 in /usr/local/lib/python3.7/dist-packages (from google-api-python-client>=1.6.7->tf-models-official>=2.5.1->object-detection==0.1) (0.0.4)
                                                                                                      Requirement already satisfied: google-api-core<3dev,>=1.21.0 in /usr/local/lib/python3.7/dist-packages (from google-api-python-client>=1.6.7->tf-models-official>=2.5.1->object-detection==0.1) (1.26.3)
                                                                                                      Requirement already satisfied: setuptools>=40.3.0 in /usr/local/lib/python3.7/dist-packages (from google-api-core<3dev,>=1.21.0->google-api-python-client>=1.6.7->tf-models-official>=2.5.1->object-detection==0.1) (57.4.0)
                                                                                                      Requirement already satisfied: pytz in /usr/local/lib/python3.7/dist-packages (from google-api-core<3dev,>=1.21.0->google-api-python-client>=1.6.7->tf-models-official>=2.5.1->object-detection==0.1) (2018.9)
                                                                                                      Requirement already satisfied: googleapis-common-protos<2.0dev,>=1.6.0 in /usr/local/lib/python3.7/dist-packages (from google-api-core<3dev,>=1.21.0->google-api-python-client>=1.6.7->tf-models-official>=2.5.1->object-detection==0.1) (1.54.0)
                                                                                                      Requirement already satisfied: requests<3.0.0dev,>=2.18.0 in /usr/local/lib/python3.7/dist-packages (from google-api-core<3dev,>=1.21.0->google-api-python-client>=1.6.7->tf-models-official>=2.5.1->object-detection==0.1) (2.23.0)
                                                                                                      Requirement already satisfied: packaging>=14.3 in /usr/local/lib/python3.7/dist-packages (from google-api-core<3dev,>=1.21.0->google-api-python-client>=1.6.7->tf-models-official>=2.5.1->object-detection==0.1) (21.3)
                                                                                                      Requirement already satisfied: protobuf>=3.12.0 in /usr/local/lib/python3.7/dist-packages (from google-api-core<3dev,>=1.21.0->google-api-python-client>=1.6.7->tf-models-official>=2.5.1->object-detection==0.1) (3.17.3)
                                                                                                      Requirement already satisfied: pyasn1-modules>=0.2.1 in /usr/local/lib/python3.7/dist-packages (from google-auth<3dev,>=1.16.0->google-api-python-client>=1.6.7->tf-models-official>=2.5.1->object-detection==0.1) (0.2.8)
                                                                                                      Requirement already satisfied: rsa<5,>=3.1.4 in /usr/local/lib/python3.7/dist-packages (from google-auth<3dev,>=1.16.0->google-api-python-client>=1.6.7->tf-models-official>=2.5.1->object-detection==0.1) (4.8)
                                                                                                      Requirement already satisfied: cachetools<5.0,>=2.0.0 in /usr/local/lib/python3.7/dist-packages (from google-auth<3dev,>=1.16.0->google-api-python-client>=1.6.7->tf-models-official>=2.5.1->object-detection==0.1) (4.2.4)
                                                                                                      Requirement already satisfied: certifi in /usr/local/lib/python3.7/dist-packages (from kaggle>=1.3.9->tf-models-official>=2.5.1->object-detection==0.1) (2021.10.8)
                                                                                                      Requirement already satisfied: urllib3 in /usr/local/lib/python3.7/dist-packages (from kaggle>=1.3.9->tf-models-official>=2.5.1->object-detection==0.1) (1.24.3)
                                                                                                      Requirement already satisfied: python-dateutil in /usr/local/lib/python3.7/dist-packages (from kaggle>=1.3.9->tf-models-official>=2.5.1->object-detection==0.1) (2.8.2)
                                                                                                      Requirement already satisfied: tqdm in /usr/local/lib/python3.7/dist-packages (from kaggle>=1.3.9->tf-models-official>=2.5.1->object-detection==0.1) (4.62.3)
                                                                                                      Requirement already satisfied: python-slugify in /usr/local/lib/python3.7/dist-packages (from kaggle>=1.3.9->tf-models-official>=2.5.1->object-detection==0.1) (5.0.2)
                                                                                                      Requirement already satisfied: pyparsing!=3.0.5,>=2.0.2 in /usr/local/lib/python3.7/dist-packages (from packaging>=14.3->google-api-core<3dev,>=1.21.0->google-api-python-client>=1.6.7->tf-models-official>=2.5.1->object-detection==0.1) (3.0.7)
                                                                                                      Requirement already satisfied: pyasn1<0.5.0,>=0.4.6 in /usr/local/lib/python3.7/dist-packages (from pyasn1-modules>=0.2.1->google-auth<3dev,>=1.16.0->google-api-python-client>=1.6.7->tf-models-official>=2.5.1->object-detection==0.1) (0.4.8)
                                                                                                      Requirement already satisfied: idna<3,>=2.5 in /usr/local/lib/python3.7/dist-packages (from requests<3.0.0dev,>=2.18.0->google-api-core<3dev,>=1.21.0->google-api-python-client>=1.6.7->tf-models-official>=2.5.1->object-detection==0.1) (2.10)
                                                                                                      Requirement already satisfied: chardet<4,>=3.0.2 in /usr/local/lib/python3.7/dist-packages (from requests<3.0.0dev,>=2.18.0->google-api-core<3dev,>=1.21.0->google-api-python-client>=1.6.7->tf-models-official>=2.5.1->object-detection==0.1) (3.0.4)
                                                                                                      Requirement already satisfied: termcolor>=1.1.0 in /usr/local/lib/python3.7/dist-packages (from tensorflow~=2.8.0->tf-models-official>=2.5.1->object-detection==0.1) (1.1.0)
                                                                                                      Requirement already satisfied: libclang>=9.0.1 in /usr/local/lib/python3.7/dist-packages (from tensorflow~=2.8.0->tf-models-official>=2.5.1->object-detection==0.1) (13.0.0)
                                                                                                      Requirement already satisfied: h5py>=2.9.0 in /usr/local/lib/python3.7/dist-packages (from tensorflow~=2.8.0->tf-models-official>=2.5.1->object-detection==0.1) (3.1.0)
                                                                                                      Requirement already satisfied: astunparse>=1.6.0 in /usr/local/lib/python3.7/dist-packages (from tensorflow~=2.8.0->tf-models-official>=2.5.1->object-detection==0.1) (1.6.3)
                                                                                                      Requirement already satisfied: gast>=0.2.1 in /usr/local/lib/python3.7/dist-packages (from tensorflow~=2.8.0->tf-models-official>=2.5.1->object-detection==0.1) (0.4.0)
                                                                                                      Requirement already satisfied: google-pasta>=0.1.1 in /usr/local/lib/python3.7/dist-packages (from tensorflow~=2.8.0->tf-models-official>=2.5.1->object-detection==0.1) (0.2.0)
                                                                                                      Requirement already satisfied: typing-extensions>=3.6.6 in /usr/local/lib/python3.7/dist-packages (from tensorflow~=2.8.0->tf-models-official>=2.5.1->object-detection==0.1) (3.10.0.2)
                                                                                                      Requirement already satisfied: wrapt>=1.11.0 in /usr/local/lib/python3.7/dist-packages (from tensorflow~=2.8.0->tf-models-official>=2.5.1->object-detection==0.1) (1.13.3)
                                                                                                      Requirement already satisfied: tensorflow-io-gcs-filesystem>=0.23.1 in /usr/local/lib/python3.7/dist-packages (from tensorflow~=2.8.0->tf-models-official>=2.5.1->object-detection==0.1) (0.23.1)
                                                                                                      Collecting tf-estimator-nightly==2.8.0.dev2021122109
                                                                                                        Downloading tf_estimator_nightly-2.8.0.dev2021122109-py2.py3-none-any.whl (462 kB)
                                                                                                           |████████████████████████████████| 462 kB 49.5 MB/s
                                                                                                      Requirement already satisfied: keras-preprocessing>=1.1.1 in /usr/local/lib/python3.7/dist-packages (from tensorflow~=2.8.0->tf-models-official>=2.5.1->object-detection==0.1) (1.1.2)
                                                                                                      Collecting tensorboard<2.9,>=2.8
                                                                                                        Downloading tensorboard-2.8.0-py3-none-any.whl (5.8 MB)
                                                                                                           |████████████████████████████████| 5.8 MB 41.2 MB/s
                                                                                                      Requirement already satisfied: flatbuffers>=1.12 in /usr/local/lib/python3.7/dist-packages (from tensorflow~=2.8.0->tf-models-official>=2.5.1->object-detection==0.1) (2.0)
                                                                                                      Collecting keras
                                                                                                        Downloading keras-2.8.0-py2.py3-none-any.whl (1.4 MB)
                                                                                                           |████████████████████████████████| 1.4 MB 41.2 MB/s
                                                                                                      Requirement already satisfied: opt-einsum>=2.3.2 in /usr/local/lib/python3.7/dist-packages (from tensorflow~=2.8.0->tf-models-official>=2.5.1->object-detection==0.1) (3.3.0)
                                                                                                      Collecting numpy>=1.15.4
                                                                                                        Downloading numpy-1.21.5-cp37-cp37m-manylinux_2_12_x86_64.manylinux2010_x86_64.whl (15.7 MB)
                                                                                                           |████████████████████████████████| 15.7 MB 41.4 MB/s
                                                                                                      Requirement already satisfied: absl-py>=0.4.0 in /usr/local/lib/python3.7/dist-packages (from tensorflow~=2.8.0->tf-models-official>=2.5.1->object-detection==0.1) (1.0.0)
                                                                                                      Requirement already satisfied: grpcio<2.0,>=1.24.3 in /usr/local/lib/python3.7/dist-packages (from tensorflow~=2.8.0->tf-models-official>=2.5.1->object-detection==0.1) (1.43.0)
                                                                                                      Requirement already satisfied: wheel<1.0,>=0.23.0 in /usr/local/lib/python3.7/dist-packages (from astunparse>=1.6.0->tensorflow~=2.8.0->tf-models-official>=2.5.1->object-detection==0.1) (0.37.1)
                                                                                                      Requirement already satisfied: cached-property in /usr/local/lib/python3.7/dist-packages (from h5py>=2.9.0->tensorflow~=2.8.0->tf-models-official>=2.5.1->object-detection==0.1) (1.5.2)
                                                                                                      Requirement already satisfied: tensorboard-data-server<0.7.0,>=0.6.0 in /usr/local/lib/python3.7/dist-packages (from tensorboard<2.9,>=2.8->tensorflow~=2.8.0->tf-models-official>=2.5.1->object-detection==0.1) (0.6.1)
                                                                                                      Requirement already satisfied: werkzeug>=0.11.15 in /usr/local/lib/python3.7/dist-packages (from tensorboard<2.9,>=2.8->tensorflow~=2.8.0->tf-models-official>=2.5.1->object-detection==0.1) (1.0.1)
                                                                                                      Requirement already satisfied: google-auth-oauthlib<0.5,>=0.4.1 in /usr/local/lib/python3.7/dist-packages (from tensorboard<2.9,>=2.8->tensorflow~=2.8.0->tf-models-official>=2.5.1->object-detection==0.1) (0.4.6)
                                                                                                      Requirement already satisfied: tensorboard-plugin-wit>=1.6.0 in /usr/local/lib/python3.7/dist-packages (from tensorboard<2.9,>=2.8->tensorflow~=2.8.0->tf-models-official>=2.5.1->object-detection==0.1) (1.8.1)
                                                                                                      Requirement already satisfied: markdown>=2.6.8 in /usr/local/lib/python3.7/dist-packages (from tensorboard<2.9,>=2.8->tensorflow~=2.8.0->tf-models-official>=2.5.1->object-detection==0.1) (3.3.6)
                                                                                                      Requirement already satisfied: requests-oauthlib>=0.7.0 in /usr/local/lib/python3.7/dist-packages (from google-auth-oauthlib<0.5,>=0.4.1->tensorboard<2.9,>=2.8->tensorflow~=2.8.0->tf-models-official>=2.5.1->object-detection==0.1) (1.3.1)
                                                                                                      Requirement already satisfied: importlib-metadata>=4.4 in /usr/local/lib/python3.7/dist-packages (from markdown>=2.6.8->tensorboard<2.9,>=2.8->tensorflow~=2.8.0->tf-models-official>=2.5.1->object-detection==0.1) (4.10.1)
                                                                                                      Requirement already satisfied: zipp>=0.5 in /usr/local/lib/python3.7/dist-packages (from importlib-metadata>=4.4->markdown>=2.6.8->tensorboard<2.9,>=2.8->tensorflow~=2.8.0->tf-models-official>=2.5.1->object-detection==0.1) (3.7.0)
                                                                                                      Requirement already satisfied: oauthlib>=3.0.0 in /usr/local/lib/python3.7/dist-packages (from requests-oauthlib>=0.7.0->google-auth-oauthlib<0.5,>=0.4.1->tensorboard<2.9,>=2.8->tensorflow~=2.8.0->tf-models-official>=2.5.1->object-detection==0.1) (3.2.0)
                                                                                                      Requirement already satisfied: dm-tree~=0.1.1 in /usr/local/lib/python3.7/dist-packages (from tensorflow-model-optimization>=0.4.1->tf-models-official>=2.5.1->object-detection==0.1) (0.1.6)
                                                                                                      Requirement already satisfied: crcmod<2.0,>=1.7 in /usr/local/lib/python3.7/dist-packages (from apache-beam->object-detection==0.1) (1.7)
                                                                                                      Collecting fastavro<2,>=0.21.4
                                                                                                        Downloading fastavro-1.4.9-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (2.3 MB)
                                                                                                           |████████████████████████████████| 2.3 MB 38.1 MB/s
                                                                                                      Requirement already satisfied: pyarrow<7.0.0,>=0.15.1 in /usr/local/lib/python3.7/dist-packages (from apache-beam->object-detection==0.1) (6.0.1)
                                                                                                      Requirement already satisfied: pydot<2,>=1.2.0 in /usr/local/lib/python3.7/dist-packages (from apache-beam->object-detection==0.1) (1.3.0)
                                                                                                      Collecting proto-plus<2,>=1.7.1
                                                                                                        Downloading proto_plus-1.19.9-py3-none-any.whl (45 kB)
                                                                                                           |████████████████████████████████| 45 kB 3.2 MB/s
                                                                                                      Collecting requests<3.0.0dev,>=2.18.0
                                                                                                        Downloading requests-2.27.1-py2.py3-none-any.whl (63 kB)
                                                                                                           |████████████████████████████████| 63 kB 1.8 MB/s
                                                                                                      Collecting dill<0.3.2,>=0.3.1.1
                                                                                                        Downloading dill-0.3.1.1.tar.gz (151 kB)
                                                                                                           |████████████████████████████████| 151 kB 44.4 MB/s
                                                                                                      Collecting numpy>=1.15.4
                                                                                                        Downloading numpy-1.20.3-cp37-cp37m-manylinux_2_12_x86_64.manylinux2010_x86_64.whl (15.3 MB)
                                                                                                           |████████████████████████████████| 15.3 MB 21.1 MB/s
                                                                                                      Collecting orjson<4.0
                                                                                                        Downloading orjson-3.6.6-cp37-cp37m-manylinux_2_24_x86_64.whl (245 kB)
                                                                                                           |████████████████████████████████| 245 kB 53.2 MB/s
                                                                                                      Collecting hdfs<3.0.0,>=2.1.0
                                                                                                        Downloading hdfs-2.6.0-py3-none-any.whl (33 kB)
                                                                                                      Collecting pymongo<4.0.0,>=3.8.0
                                                                                                        Downloading pymongo-3.12.3-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (508 kB)
                                                                                                           |████████████████████████████████| 508 kB 44.3 MB/s
                                                                                                      Requirement already satisfied: docopt in /usr/local/lib/python3.7/dist-packages (from hdfs<3.0.0,>=2.1.0->apache-beam->object-detection==0.1) (0.6.2)
                                                                                                      Collecting protobuf>=3.12.0
                                                                                                        Downloading protobuf-3.19.4-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (1.1 MB)
                                                                                                           |████████████████████████████████| 1.1 MB 47.3 MB/s
                                                                                                      Requirement already satisfied: charset-normalizer~=2.0.0 in /usr/local/lib/python3.7/dist-packages (from requests<3.0.0dev,>=2.18.0->google-api-core<3dev,>=1.21.0->google-api-python-client>=1.6.7->tf-models-official>=2.5.1->object-detection==0.1) (2.0.11)
                                                                                                      Requirement already satisfied: opencv-python>=4.1.0.25 in /usr/local/lib/python3.7/dist-packages (from lvis->object-detection==0.1) (4.1.2.30)
                                                                                                      Requirement already satisfied: cycler>=0.10.0 in /usr/local/lib/python3.7/dist-packages (from lvis->object-detection==0.1) (0.11.0)
                                                                                                      Requirement already satisfied: kiwisolver>=1.1.0 in /usr/local/lib/python3.7/dist-packages (from lvis->object-detection==0.1) (1.3.2)
                                                                                                      Requirement already satisfied: text-unidecode>=1.3 in /usr/local/lib/python3.7/dist-packages (from python-slugify->kaggle>=1.3.9->tf-models-official>=2.5.1->object-detection==0.1) (1.3)
                                                                                                      Requirement already satisfied: regex in /usr/local/lib/python3.7/dist-packages (from sacrebleu->tf-models-official>=2.5.1->object-detection==0.1) (2019.12.20)
                                                                                                      Requirement already satisfied: tabulate>=0.8.9 in /usr/local/lib/python3.7/dist-packages (from sacrebleu->tf-models-official>=2.5.1->object-detection==0.1) (0.8.9)
                                                                                                      Collecting portalocker
                                                                                                        Downloading portalocker-2.3.2-py2.py3-none-any.whl (15 kB)
                                                                                                      Collecting colorama
                                                                                                        Downloading colorama-0.4.4-py2.py3-none-any.whl (16 kB)
                                                                                                      Requirement already satisfied: scikit-learn>=0.21.3 in /usr/local/lib/python3.7/dist-packages (from seqeval->tf-models-official>=2.5.1->object-detection==0.1) (1.0.2)
                                                                                                      Requirement already satisfied: joblib>=0.11 in /usr/local/lib/python3.7/dist-packages (from scikit-learn>=0.21.3->seqeval->tf-models-official>=2.5.1->object-detection==0.1) (1.1.0)
                                                                                                      Requirement already satisfied: threadpoolctl>=2.0.0 in /usr/local/lib/python3.7/dist-packages (from scikit-learn>=0.21.3->seqeval->tf-models-official>=2.5.1->object-detection==0.1) (3.1.0)
                                                                                                      Requirement already satisfied: typeguard>=2.7 in /usr/local/lib/python3.7/dist-packages (from tensorflow-addons->tf-models-official>=2.5.1->object-detection==0.1) (2.7.1)
                                                                                                      Requirement already satisfied: promise in /usr/local/lib/python3.7/dist-packages (from tensorflow-datasets->tf-models-official>=2.5.1->object-detection==0.1) (2.3)
                                                                                                      Requirement already satisfied: future in /usr/local/lib/python3.7/dist-packages (from tensorflow-datasets->tf-models-official>=2.5.1->object-detection==0.1) (0.16.0)
                                                                                                      Requirement already satisfied: attrs>=18.1.0 in /usr/local/lib/python3.7/dist-packages (from tensorflow-datasets->tf-models-official>=2.5.1->object-detection==0.1) (21.4.0)
                                                                                                      Requirement already satisfied: importlib-resources in /usr/local/lib/python3.7/dist-packages (from tensorflow-datasets->tf-models-official>=2.5.1->object-detection==0.1) (5.4.0)
                                                                                                      Requirement already satisfied: tensorflow-metadata in /usr/local/lib/python3.7/dist-packages (from tensorflow-datasets->tf-models-official>=2.5.1->object-detection==0.1) (1.6.0)
                                                                                                      Collecting tensorflow-io-gcs-filesystem>=0.23.1
                                                                                                        Downloading tensorflow_io_gcs_filesystem-0.24.0-cp37-cp37m-manylinux_2_12_x86_64.manylinux2010_x86_64.whl (2.1 MB)
                                                                                                           |████████████████████████████████| 2.1 MB 40.9 MB/s
                                                                                                      Building wheels for collected packages: object-detection, py-cpuinfo, dill, avro-python3, seqeval
                                                                                                        Building wheel for object-detection (setup.py) ... done
                                                                                                        Created wheel for object-detection: filename=object_detection-0.1-py3-none-any.whl size=1686316 sha256=775b8c34c800b3b3139d1067abd686af9ce9158011fccfb5450ccfd9bf424a5a
                                                                                                        Stored in directory: /tmp/pip-ephem-wheel-cache-rmw0fvil/wheels/d0/e3/e9/b9ffe85019ec441e90d8ff9eddee9950c4c23b7598204390b9
                                                                                                        Building wheel for py-cpuinfo (setup.py) ... done
                                                                                                        Created wheel for py-cpuinfo: filename=py_cpuinfo-8.0.0-py3-none-any.whl size=22257 sha256=ac956c4c039868fdba78645bea056754e667e8840bea783ad2ca75e4d3e682c6
                                                                                                        Stored in directory: /root/.cache/pip/wheels/d2/f1/1f/041add21dc9c4220157f1bd2bd6afe1f1a49524c3396b94401
                                                                                                        Building wheel for dill (setup.py) ... done
                                                                                                        Created wheel for dill: filename=dill-0.3.1.1-py3-none-any.whl size=78544 sha256=d9c6cdfd69aea2b4d78e6afbbe2bc530394e4081eb186eb4f4cd02373ca739fd
                                                                                                        Stored in directory: /root/.cache/pip/wheels/a4/61/fd/c57e374e580aa78a45ed78d5859b3a44436af17e22ca53284f
                                                                                                        Building wheel for avro-python3 (setup.py) ... done
                                                                                                        Created wheel for avro-python3: filename=avro_python3-1.10.2-py3-none-any.whl size=44010 sha256=4eca8b4f30e4850d5dabccee36c40c8dda8a6c7e7058cfb7f0258eea5ce7b2b3
                                                                                                        Stored in directory: /root/.cache/pip/wheels/d6/e5/b1/6b151d9b535ee50aaa6ab27d145a0104b6df02e5636f0376da
                                                                                                        Building wheel for seqeval (setup.py) ... done
                                                                                                        Created wheel for seqeval: filename=seqeval-1.2.2-py3-none-any.whl size=16180 sha256=0ddfa46d0e36e9be346a90833ef11cc0d38cc7e744be34c5a0d321f997a30cba
                                                                                                        Stored in directory: /root/.cache/pip/wheels/05/96/ee/7cac4e74f3b19e3158dce26a20a1c86b3533c43ec72a549fd7
                                                                                                      Successfully built object-detection py-cpuinfo dill avro-python3 seqeval
                                                                                                      Installing collected packages: requests, protobuf, numpy, tf-estimator-nightly, tensorflow-io-gcs-filesystem, tensorboard, keras, tensorflow, portalocker, dill, colorama, tf-slim, tensorflow-text, tensorflow-model-optimization, tensorflow-addons, seqeval, sentencepiece, sacrebleu, pyyaml, pymongo, py-cpuinfo, proto-plus, orjson, opencv-python-headless, hdfs, fastavro, tf-models-official, tensorflow-io, lvis, avro-python3, apache-beam, object-detection
                                                                                                        Attempting uninstall: requests
                                                                                                          Found existing installation: requests 2.23.0
                                                                                                          Uninstalling requests-2.23.0:
                                                                                                            Successfully uninstalled requests-2.23.0
                                                                                                        Attempting uninstall: protobuf
                                                                                                          Found existing installation: protobuf 3.17.3
                                                                                                          Uninstalling protobuf-3.17.3:
                                                                                                            Successfully uninstalled protobuf-3.17.3
                                                                                                        Attempting uninstall: numpy
                                                                                                          Found existing installation: numpy 1.19.5
                                                                                                          Uninstalling numpy-1.19.5:
                                                                                                            Successfully uninstalled numpy-1.19.5
                                                                                                        Attempting uninstall: tensorflow-io-gcs-filesystem
                                                                                                          Found existing installation: tensorflow-io-gcs-filesystem 0.23.1
                                                                                                          Uninstalling tensorflow-io-gcs-filesystem-0.23.1:
                                                                                                            Successfully uninstalled tensorflow-io-gcs-filesystem-0.23.1
                                                                                                        Attempting uninstall: tensorboard
                                                                                                          Found existing installation: tensorboard 2.7.0
                                                                                                          Uninstalling tensorboard-2.7.0:
                                                                                                            Successfully uninstalled tensorboard-2.7.0
                                                                                                        Attempting uninstall: keras
                                                                                                          Found existing installation: keras 2.7.0
                                                                                                          Uninstalling keras-2.7.0:
                                                                                                            Successfully uninstalled keras-2.7.0
                                                                                                        Attempting uninstall: tensorflow
                                                                                                          Found existing installation: tensorflow 2.7.0
                                                                                                          Uninstalling tensorflow-2.7.0:
                                                                                                            Successfully uninstalled tensorflow-2.7.0
                                                                                                        Attempting uninstall: dill
                                                                                                          Found existing installation: dill 0.3.4
                                                                                                          Uninstalling dill-0.3.4:
                                                                                                            Successfully uninstalled dill-0.3.4
                                                                                                        Attempting uninstall: pyyaml
                                                                                                          Found existing installation: PyYAML 3.13
                                                                                                          Uninstalling PyYAML-3.13:
                                                                                                            Successfully uninstalled PyYAML-3.13
                                                                                                        Attempting uninstall: pymongo
                                                                                                          Found existing installation: pymongo 4.0.1
                                                                                                          Uninstalling pymongo-4.0.1:
                                                                                                            Successfully uninstalled pymongo-4.0.1
                                                                                                      ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
                                                                                                      yellowbrick 1.3.post1 requires numpy<1.20,>=1.16.0, but you have numpy 1.20.3 which is incompatible.
                                                                                                      multiprocess 0.70.12.2 requires dill>=0.3.4, but you have dill 0.3.1.1 which is incompatible.
                                                                                                      google-colab 1.0.0 requires requests~=2.23.0, but you have requests 2.27.1 which is incompatible.
                                                                                                      datascience 0.10.6 requires folium==0.2.1, but you have folium 0.8.3 which is incompatible.
                                                                                                      albumentations 0.1.12 requires imgaug<0.2.7,>=0.2.5, but you have imgaug 0.2.9 which is incompatible.
                                                                                                      Successfully installed apache-beam-2.35.0 avro-python3-1.10.2 colorama-0.4.4 dill-0.3.1.1 fastavro-1.4.9 hdfs-2.6.0 keras-2.8.0 lvis-0.5.3 numpy-1.20.3 object-detection-0.1 opencv-python-headless-4.5.5.62 orjson-3.6.6 portalocker-2.3.2 proto-plus-1.19.9 protobuf-3.19.4 py-cpuinfo-8.0.0 pymongo-3.12.3 pyyaml-5.4.1 requests-2.27.1 sacrebleu-2.0.0 sentencepiece-0.1.96 seqeval-1.2.2 tensorboard-2.8.0 tensorflow-2.8.0 tensorflow-addons-0.15.0 tensorflow-io-0.24.0 tensorflow-io-gcs-filesystem-0.24.0 tensorflow-model-optimization-0.7.0 tensorflow-text-2.8.1 tf-estimator-nightly-2.8.0.dev2021122109 tf-models-official-2.8.0 tf-slim-1.1.0
                                                                                                      

I am noticing that this command uninstalls TensorFlow 2.7 and installs TensorFlow 2.8. I am not sure this was happening before. Maybe that is the reason the DNN library link is missing, or something like that?
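A minimal sketch of a possible workaround, assuming the missing DNN library really is caused by the TensorFlow 2.7 -> 2.8 jump (this command is not part of the original post): pin TensorFlow back to the version the Colab runtime shipped with, then restart the runtime.

    # Sketch: reinstall the TensorFlow line that matched the preinstalled CUDA/cuDNN stack.
    !pip install tensorflow==2.7.0

    # After restarting the runtime, confirm the pinned version is active.
    import tensorflow as tf
    print(tf.__version__)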

                                                                                                      I can confirm these:

1. Nothing was changed in the pretrained model, the already-installed model, or the object_detection source files I downloaded a year ago.
2. I tried running the command !pip install dnn - it did not work.
3. I tried restarting the runtime (without disconnecting) - it did not work.

Can somebody help? Thanks.

                                                                                                      ANSWER

                                                                                                      Answered 2022-Feb-07 at 09:19

The same thing happened to me last Friday. I think it has something to do with the CUDA installation in Google Colab, but I don't know the exact reason.
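One way to check that hypothesis (a diagnostic sketch, not from the original answer) is to compare the CUDA/cuDNN versions TensorFlow was built against with what the runtime actually provides:

    import tensorflow as tf

    # Build-time version info; the cuda/cudnn keys are present in GPU builds.
    build = tf.sysconfig.get_build_info()
    print("TF version:", tf.__version__)
    print("Built against CUDA:", build.get("cuda_version"))
    print("Built against cuDNN:", build.get("cudnn_version"))
    print("GPUs visible:", tf.config.list_physical_devices("GPU"))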

                                                                                                      Source https://stackoverflow.com/questions/71000120

                                                                                                      QUESTION

                                                                                                      Cannot find conda info. Please verify your conda installation on EMR
                                                                                                      Asked 2022-Feb-05 at 00:17

I am trying to install conda on EMR; below is my bootstrap script. It looks like conda is getting installed, but it is not being added to the environment variables. When I manually update the $PATH variable on the EMR master node, conda is found. I want to use conda from Zeppelin.

I also tried adding the configuration below while launching my EMR instance, but I still get the error mentioned below.

                                                                                                          "classification": "spark-env",
                                                                                                          "properties": {
                                                                                                              "conda": "/home/hadoop/conda/bin"
                                                                                                          }
                                                                                                      
                                                                                                      [hadoop@ip-172-30-5-150 ~]$ PATH=/home/hadoop/conda/bin:$PATH
                                                                                                      [hadoop@ip-172-30-5-150 ~]$ conda
                                                                                                      usage: conda [-h] [-V] command ...
                                                                                                      
                                                                                                      conda is a tool for managing and deploying applications, environments and packages.
                                                                                                      
                                                                                                      #!/usr/bin/env bash
                                                                                                      
                                                                                                      
                                                                                                      # Install conda
                                                                                                      wget https://repo.continuum.io/miniconda/Miniconda3-4.2.12-Linux-x86_64.sh -O /home/hadoop/miniconda.sh \
                                                                                                          && /bin/bash ~/miniconda.sh -b -p $HOME/conda
                                                                                                      
                                                                                                      
                                                                                                      conda config --set always_yes yes --set changeps1 no
                                                                                                      conda install conda=4.2.13
                                                                                                      conda config -f --add channels conda-forge
                                                                                                      rm ~/miniconda.sh
                                                                                                      echo bootstrap_conda.sh completed. PATH now: $PATH
                                                                                                      export PYSPARK_PYTHON="/home/hadoop/conda/bin/python3.5"
                                                                                                      
                                                                                                      echo -e '\nexport PATH=$HOME/conda/bin:$PATH' >> $HOME/.bashrc && source $HOME/.bashrc
                                                                                                      
                                                                                                      
                                                                                                      conda create -n zoo python=3.7 # "zoo" is conda environment name, you can use any name you like.
                                                                                                      conda activate zoo
                                                                                                      sudo pip3 install tensorflow
                                                                                                      sudo pip3 install boto3
                                                                                                      sudo pip3 install botocore
                                                                                                      sudo pip3 install numpy
                                                                                                      sudo pip3 install pandas
                                                                                                      sudo pip3 install scipy
                                                                                                      sudo pip3 install s3fs
                                                                                                      sudo pip3 install matplotlib
                                                                                                      sudo pip3 install -U tqdm
                                                                                                      sudo pip3 install -U scikit-learn
                                                                                                      sudo pip3 install -U scikit-multilearn
                                                                                                      sudo pip3 install xlutils
                                                                                                      sudo pip3 install natsort
                                                                                                      sudo pip3 install pydot
                                                                                                      sudo pip3 install python-pydot
                                                                                                      sudo pip3 install python-pydot-ng
                                                                                                      sudo pip3 install pydotplus
                                                                                                      sudo pip3 install h5py
                                                                                                      sudo pip3 install graphviz
                                                                                                      sudo pip3 install recmetrics
                                                                                                      sudo pip3 install openpyxl
                                                                                                      sudo pip3 install xlrd
                                                                                                      sudo pip3 install xlwt
                                                                                                      sudo pip3 install tensorflow.io
                                                                                                      sudo pip3 install Cython
                                                                                                      sudo pip3 install ray
                                                                                                      sudo pip3 install zoo
                                                                                                      sudo pip3 install analytics-zoo
                                                                                                      sudo pip3 install analytics-zoo[ray]
                                                                                                      #sudo /usr/bin/pip-3.6 install -U imbalanced-learn
                                                                                                      
                                                                                                      
                                                                                                      

                                                                                                      ANSWER

                                                                                                      Answered 2022-Feb-05 at 00:17

I got conda working by modifying the script as below; the EMR Python versions were colliding with the conda version:

                                                                                                      wget https://repo.anaconda.com/miniconda/Miniconda3-py37_4.9.2-Linux-x86_64.sh  -O /home/hadoop/miniconda.sh \
                                                                                                          && /bin/bash ~/miniconda.sh -b -p $HOME/conda
                                                                                                      
                                                                                                      echo -e '\n export PATH=$HOME/conda/bin:$PATH' >> $HOME/.bashrc && source $HOME/.bashrc
                                                                                                      
                                                                                                      
                                                                                                      conda config --set always_yes yes --set changeps1 no
                                                                                                      conda config -f --add channels conda-forge
                                                                                                      
                                                                                                      
                                                                                                      conda create -n zoo python=3.7 # "zoo" is conda environment name
                                                                                                      conda init bash
                                                                                                      source activate zoo
                                                                                                      conda install python 3.7.0 -c conda-forge orca 
                                                                                                      sudo /home/hadoop/conda/envs/zoo/bin/python3.7 -m pip install virtualenv
                                                                                                      

and by setting the Zeppelin Python and PySpark parameters to:

"spark.pyspark.python": "/home/hadoop/conda/envs/zoo/bin/python3",
"spark.pyspark.virtualenv.enabled": "true",
"spark.pyspark.virtualenv.type": "native",
"spark.pyspark.virtualenv.bin.path": "/home/hadoop/conda/envs/zoo/bin/",
"zeppelin.pyspark.python": "/home/hadoop/conda/bin/python",
"zeppelin.python": "/home/hadoop/conda/bin/python"
                                                                                                      

Orca only supports TF up to 1.5, hence it was not working since I am using TF2.

                                                                                                      Source https://stackoverflow.com/questions/70901724

                                                                                                      QUESTION

                                                                                                      Edge weight in networkx
                                                                                                      Asked 2022-Feb-02 at 14:20

How do I assign to each edge a weight equal to the number of times nodes i and j interacted, given an edge list?

                                                                                                      import pandas as pd
                                                                                                      import numpy as np
                                                                                                      import matplotlib.pyplot as plt
                                                                                                      import networkx as nx
                                                                                                      import scipy.sparse
                                                                                                      
                                                                                                      df = pd.read_csv("thiers_2011.csv", header = None)
                                                                                                      df = df.rename(columns={0: "t", 1: "id1", 2: "id2", 3: "C1", 4: "C2"})
                                                                                                      
                                                                                                      edge_list = np.zeros((len(df),2))
                                                                                                      edge_list[:,0] = np.array(df["id1"]) 
                                                                                                      edge_list[:,1] = np.array(df["id2"]) 
                                                                                                      
                                                                                                      G = nx.Graph()
                                                                                                      G.add_edges_from(edge_list)
                                                                                                      

                                                                                                      ANSWER

                                                                                                      Answered 2022-Feb-02 at 14:20

You can first aggregate the pandas table to get a weight column, and then load it into networkx with that edge attribute:

                                                                                                      df["weight"] = 1.0
                                                                                                      df = df.groupby([]).agg({"wight": sum}).reset_index()
                                                                                                      

To load it you can also use from_pandas_edgelist:

G = nx.from_pandas_edgelist(df, source="id1", target="id2", edge_attr="weight")
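
For context, here is a minimal, self-contained sketch of the same idea on a toy edge list; the toy DataFrame is illustrative, with the id1/id2 column names simply mirroring the question's data:

import pandas as pd
import networkx as nx

# Toy interaction list: the pair (1, 2) appears twice, so its edge weight should be 2.
toy = pd.DataFrame({"id1": [1, 1, 2], "id2": [2, 2, 3]})

toy["weight"] = 1.0
toy = toy.groupby(["id1", "id2"]).agg({"weight": "sum"}).reset_index()

G = nx.from_pandas_edgelist(toy, source="id1", target="id2", edge_attr="weight")
print(G[1][2]["weight"])  # 2.0
print(G[2][3]["weight"])  # 1.0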
                                                                                                      

                                                                                                      Source https://stackoverflow.com/questions/70956465

                                                                                                      QUESTION

                                                                                                      ERROR: Could not build wheels for pycairo, which is required to install pyproject.toml-based projects
                                                                                                      Asked 2022-Jan-28 at 03:50

Error while installing manimce: I have been trying to install the manimce library on Windows Subsystem for Linux, and after running

                                                                                                      pip install manimce
                                                                                                      Collecting manimce
                                                                                                        Downloading manimce-0.1.1.post2-py3-none-any.whl (249 kB)
                                                                                                           |████████████████████████████████| 249 kB 257 kB/s
                                                                                                      Collecting Pillow
                                                                                                        Using cached Pillow-8.4.0-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (3.1 MB)
                                                                                                      Collecting scipy
                                                                                                        Using cached scipy-1.7.3-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (39.3 MB)
                                                                                                      Collecting colour
                                                                                                        Using cached colour-0.1.5-py2.py3-none-any.whl (23 kB)
                                                                                                      Collecting pangocairocffi<0.5.0,>=0.4.0
                                                                                                        Downloading pangocairocffi-0.4.0.tar.gz (17 kB)
                                                                                                        Preparing metadata (setup.py) ... done
                                                                                                      Collecting numpy
                                                                                                        Using cached numpy-1.21.5-cp38-cp38-manylinux_2_12_x86_64.manylinux2010_x86_64.whl (15.7 MB)
                                                                                                      Collecting pydub
                                                                                                        Using cached pydub-0.25.1-py2.py3-none-any.whl (32 kB)
                                                                                                      Collecting pygments
                                                                                                        Using cached Pygments-2.10.0-py3-none-any.whl (1.0 MB)
                                                                                                      Collecting cairocffi<2.0.0,>=1.1.0
                                                                                                        Downloading cairocffi-1.3.0.tar.gz (88 kB)
                                                                                                           |████████████████████████████████| 88 kB 160 kB/s
                                                                                                        Preparing metadata (setup.py) ... done
                                                                                                      Collecting tqdm
                                                                                                        Using cached tqdm-4.62.3-py2.py3-none-any.whl (76 kB)
                                                                                                      Collecting pangocffi<0.9.0,>=0.8.0
                                                                                                        Downloading pangocffi-0.8.0.tar.gz (33 kB)
                                                                                                        Preparing metadata (setup.py) ... done
                                                                                                      Collecting pycairo<2.0,>=1.19
                                                                                                        Using cached pycairo-1.20.1.tar.gz (344 kB)
                                                                                                      
                                                                                                        Installing build dependencies ... done
                                                                                                        Getting requirements to build wheel ... done
                                                                                                        Preparing metadata (pyproject.toml) ... done
                                                                                                      Collecting progressbar
                                                                                                        Downloading progressbar-2.5.tar.gz (10 kB)
                                                                                                        Preparing metadata (setup.py) ... done
                                                                                                      Collecting rich<7.0,>=6.0
                                                                                                        Using cached rich-6.2.0-py3-none-any.whl (150 kB)
                                                                                                      Collecting cffi>=1.1.0
                                                                                                        Using cached cffi-1.15.0-cp38-cp38-manylinux_2_12_x86_64.manylinux2010_x86_64.whl (446 kB)
                                                                                                      Collecting commonmark<0.10.0,>=0.9.0
                                                                                                        Using cached commonmark-0.9.1-py2.py3-none-any.whl (51 kB)
                                                                                                      Collecting typing-extensions<4.0.0,>=3.7.4
                                                                                                        Using cached typing_extensions-3.10.0.2-py3-none-any.whl (26 kB)
                                                                                                      Collecting colorama<0.5.0,>=0.4.0
                                                                                                        Using cached colorama-0.4.4-py2.py3-none-any.whl (16 kB)
                                                                                                      Collecting pycparser
                                                                                                        Using cached pycparser-2.21-py2.py3-none-any.whl (118 kB)
                                                                                                      Building wheels for collected packages: cairocffi, pangocairocffi, pangocffi, pycairo, progressbar
                                                                                                        Building wheel for cairocffi (setup.py) ... done
                                                                                                        Created wheel for cairocffi: filename=cairocffi-1.3.0-py3-none-any.whl size=89650 sha256=afc73218cc9fa1d844d7165f598e2be0428598166b4c3ed9de5bbdc94a0a6977
                                                                                                        Stored in directory: /home/yusifer_zendric/.cache/pip/wheels/f3/97/83/8022b9237866102e18d1b7ac0a269769e6fccba0f63dceb9b7
                                                                                                        Building wheel for pangocairocffi (setup.py) ... done
                                                                                                        Created wheel for pangocairocffi: filename=pangocairocffi-0.4.0-py3-none-any.whl size=19283 sha256=54399796259c6e24f9ab56c5747ab273dcf97fb6fed3e7b54935f9ac49351d50
                                                                                                        Stored in directory: /home/yusifer_zendric/.cache/pip/wheels/60/58/92/507a12a5044f7fcda6f4dfd8e0a607cc1fe957bc0dea885906
                                                                                                        Building wheel for pangocffi (setup.py) ... done
                                                                                                        Created wheel for pangocffi: filename=pangocffi-0.8.0-py3-none-any.whl size=37899 sha256=bea348af93696816b046dd901aa60d29a464460c5faac67628eb7e1ea7d1807d
                                                                                                        Stored in directory: /home/yusifer_zendric/.cache/pip/wheels/c4/df/6d/e9d0f79b1545f6e902cc22773b1429de7a5efc240b891ee009
                                                                                                        Building wheel for pycairo (pyproject.toml) ... error
                                                                                                        ERROR: Command errored out with exit status 1:
                                                                                                         command: /home/yusifer_zendric/manim_ce/venv/bin/python /home/yusifer_zendric/manim_ce/venv/lib/python3.8/site-packages/pip/_vendor/pep517/in_process/_in_process.py build_wheel /tmp/tmpuguwzu3u
                                                                                                             cwd: /tmp/pip-install-l4hqdegr/pycairo_f4d80b8f3e4840a3802342825adcdff5
                                                                                                        Complete output (12 lines):
                                                                                                        running bdist_wheel
                                                                                                        running build
                                                                                                        running build_py
                                                                                                        creating build
                                                                                                        creating build/lib.linux-x86_64-3.8
                                                                                                        creating build/lib.linux-x86_64-3.8/cairo
                                                                                                        copying cairo/__init__.py -> build/lib.linux-x86_64-3.8/cairo
                                                                                                        copying cairo/__init__.pyi -> build/lib.linux-x86_64-3.8/cairo
                                                                                                        copying cairo/py.typed -> build/lib.linux-x86_64-3.8/cairo
                                                                                                        running build_ext
                                                                                                        'pkg-config' not found.
                                                                                                        Command ['pkg-config', '--print-errors', '--exists', 'cairo >= 1.15.10']
                                                                                                        ----------------------------------------
                                                                                                        ERROR: Failed building wheel for pycairo
                                                                                                        Building wheel for progressbar (setup.py) ... done
                                                                                                        Created wheel for progressbar: filename=progressbar-2.5-py3-none-any.whl size=12074 sha256=7290ef8de5dd955bf756b90130f400dd19c2cc9ea050a5a1dce2803440f581e2
                                                                                                        Stored in directory: /home/yusifer_zendric/.cache/pip/wheels/2c/67/ed/d84123843c937d7e7f5ba88a270d11036473144143355e2747
                                                                                                      Successfully built cairocffi pangocairocffi pangocffi progressbar
                                                                                                      Failed to build pycairo
                                                                                                      ERROR: Could not build wheels for pycairo, which is required to install pyproject.toml-based projects
                                                                                                      (venv) yusifer_zendric@Laptop-Yusifer:~/manim_ce$
                                                                                                      (venv) yusifer_zendric@Laptop-Yusifer:~/manim_ce$ pip install manim_ce
                                                                                                      ERROR: Could not find a version that satisfies the requirement manim_ce (from versions: none)
                                                                                                      ERROR: No matching distribution found for manim_ce
                                                                                                      (venv) yusifer_zendric@Laptop-Yusifer:~/manim_ce$ manim example_scenes/basic.py -pql
                                                                                                      
                                                                                                      Command 'manim' not found, did you mean:
                                                                                                      
                                                                                                        command 'maim' from deb maim (5.5.3-1build1)
                                                                                                      
                                                                                                      Try: sudo apt install 
                                                                                                      
                                                                                                      (venv) yusifer_zendric@Laptop-Yusifer:~/manim_ce$ sudo apt-get install manim
                                                                                                      [sudo] password for yusifer_zendric:
                                                                                                      Reading package lists... Done
                                                                                                      Building dependency tree
                                                                                                      Reading state information... Done
                                                                                                      E: Unable to locate package manim
                                                                                                      (venv) yusifer_zendric@Laptop-Yusifer:~/manim_ce$ pip3 install manimlib
                                                                                                      Collecting manimlib
                                                                                                        Downloading manimlib-0.2.0.tar.gz (4.8 MB)
                                                                                                           |████████████████████████████████| 4.8 MB 498 kB/s
                                                                                                        Preparing metadata (setup.py) ... done
                                                                                                      Collecting Pillow
                                                                                                        Using cached Pillow-8.4.0-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (3.1 MB)
                                                                                                      Collecting argparse
                                                                                                        Downloading argparse-1.4.0-py2.py3-none-any.whl (23 kB)
                                                                                                      Collecting colour
                                                                                                        Using cached colour-0.1.5-py2.py3-none-any.whl (23 kB)
                                                                                                      Collecting numpy
                                                                                                        Using cached numpy-1.21.5-cp38-cp38-manylinux_2_12_x86_64.manylinux2010_x86_64.whl (15.7 MB)
                                                                                                      Collecting opencv-python
                                                                                                        Downloading opencv_python-4.5.4.60-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (60.3 MB)
                                                                                                           |████████████████████████████████| 60.3 MB 520 kB/s
                                                                                                      Collecting progressbar
                                                                                                        Using cached progressbar-2.5-py3-none-any.whl
                                                                                                      Collecting pycairo
                                                                                                        Using cached pycairo-1.20.1.tar.gz (344 kB)
                                                                                                        Installing build dependencies ... done
                                                                                                        Getting requirements to build wheel ... done
                                                                                                        Preparing metadata (pyproject.toml) ... done
                                                                                                      Collecting pydub
                                                                                                        Using cached pydub-0.25.1-py2.py3-none-any.whl (32 kB)
                                                                                                      Collecting pygments
                                                                                                        Using cached Pygments-2.10.0-py3-none-any.whl (1.0 MB)
                                                                                                      Collecting scipy
                                                                                                        Using cached scipy-1.7.3-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (39.3 MB)
                                                                                                      Collecting tqdm
                                                                                                        Using cached tqdm-4.62.3-py2.py3-none-any.whl (76 kB)
                                                                                                      Building wheels for collected packages: manimlib, pycairo
                                                                                                        Building wheel for manimlib (setup.py) ... done
                                                                                                        Created wheel for manimlib: filename=manimlib-0.2.0-py3-none-any.whl size=212737 sha256=27efe2c226d80cfe5663928e980d3e5f5a164d8e9d0aacea5014d37ffdedb76a
                                                                                                        Stored in directory: /home/yusifer_zendric/.cache/pip/wheels/87/36/c1/2db5ed5de9908034108f3c39538cd3367445d9cec01e7c8c23
                                                                                                        Building wheel for pycairo (pyproject.toml) ... error
                                                                                                        ERROR: Command errored out with exit status 1:
                                                                                                         command: /home/yusifer_zendric/manim_ce/venv/bin/python /home/yusifer_zendric/manim_ce/venv/lib/python3.8/site-packages/pip/_vendor/pep517/in_process/_in_process.py build_wheel /tmp/tmp5o2970su
                                                                                                             cwd: /tmp/pip-install-sxxp3lw2/pycairo_d372a62d0c6b4c4484391402d21485e1
                                                                                                        Complete output (12 lines):
                                                                                                        running bdist_wheel
                                                                                                        running build
                                                                                                        running build_py
                                                                                                        creating build
                                                                                                        creating build/lib.linux-x86_64-3.8
                                                                                                        creating build/lib.linux-x86_64-3.8/cairo
                                                                                                        copying cairo/__init__.py -> build/lib.linux-x86_64-3.8/cairo
                                                                                                        copying cairo/__init__.pyi -> build/lib.linux-x86_64-3.8/cairo
                                                                                                        copying cairo/py.typed -> build/lib.linux-x86_64-3.8/cairo
                                                                                                        running build_ext
                                                                                                        'pkg-config' not found.
                                                                                                        Command ['pkg-config', '--print-errors', '--exists', 'cairo >= 1.15.10']
                                                                                                        ----------------------------------------
                                                                                                        ERROR: Failed building wheel for pycairo
                                                                                                      Successfully built manimlib
                                                                                                      Failed to build pycairo
                                                                                                      ERROR: Could not build wheels for pycairo, which is required to install pyproject.toml-based projects
                                                                                                      

All the libraries are installed except the pycairo library. It just keeps showing this pyproject.toml error. In fact, I have already run pip install pyproject.toml and it is installed, but it still shows the same error.

                                                                                                      ANSWER

Answered 2022-Jan-28 at 02:24

Install the system dependencies first (the build log above shows the pycairo wheel failing because 'pkg-config' and the cairo development headers cannot be found), then install the Python package:

apt-get install sox ffmpeg libcairo2 libcairo2-dev
apt-get install texlive-full
pip3 install manimlib  # or pip install manimlib
                                                                                                      

                                                                                                      Then:

                                                                                                      pip3 install manimce  # or pip install manimce
                                                                                                      

                                                                                                      And everything works.

                                                                                                      Source https://stackoverflow.com/questions/70508775

                                                                                                      QUESTION

                                                                                                      Is it possible to use a collection of hyperspectral 1x1 pixels in a CNN model purposed for more conventional datasets (CIFAR-10/MNIST)?
                                                                                                      Asked 2021-Dec-17 at 09:08

I have created a working CNN model in Keras/Tensorflow, and have successfully used the CIFAR-10 & MNIST datasets to test this model. The functioning code is shown below:

                                                                                                      import keras
                                                                                                      from keras.datasets import cifar10
                                                                                                      from keras.utils import to_categorical
                                                                                                      from keras.models import Sequential
                                                                                                      from keras.layers import Dense, Activation, Dropout, Conv2D, Flatten, MaxPooling2D
                                                                                                      from keras.layers.normalization import BatchNormalization
                                                                                                      
                                                                                                      (X_train, y_train), (X_test, y_test) = cifar10.load_data()
                                                                                                      
                                                                                                      #reshape data to fit model
                                                                                                      X_train = X_train.reshape(50000,32,32,3)
                                                                                                      X_test = X_test.reshape(10000,32,32,3)
                                                                                                      
                                                                                                      y_train = to_categorical(y_train)
                                                                                                      y_test = to_categorical(y_test)
                                                                                                      
                                                                                                      
# Building the model 
model = Sequential()

#1st Convolutional Layer
                                                                                                      model.add(Conv2D(filters=64, input_shape=(32,32,3), kernel_size=(11,11), strides=(4,4), padding='same'))
                                                                                                      model.add(BatchNormalization())
                                                                                                      model.add(Activation('relu'))
                                                                                                      model.add(MaxPooling2D(pool_size=(2,2), strides=(2,2), padding='same'))
                                                                                                      
                                                                                                      #2nd Convolutional Layer
                                                                                                      model.add(Conv2D(filters=224, kernel_size=(5, 5), strides=(1,1), padding='same'))
                                                                                                      model.add(BatchNormalization())
                                                                                                      model.add(Activation('relu'))
                                                                                                      model.add(MaxPooling2D(pool_size=(2,2), strides=(2,2), padding='same'))
                                                                                                      
                                                                                                      #3rd Convolutional Layer
                                                                                                      model.add(Conv2D(filters=288, kernel_size=(3,3), strides=(1,1), padding='same'))
                                                                                                      model.add(BatchNormalization())
                                                                                                      model.add(Activation('relu'))
                                                                                                      
                                                                                                      #4th Convolutional Layer
                                                                                                      model.add(Conv2D(filters=288, kernel_size=(3,3), strides=(1,1), padding='same'))
                                                                                                      model.add(BatchNormalization())
                                                                                                      model.add(Activation('relu'))
                                                                                                      
                                                                                                      #5th Convolutional Layer
                                                                                                      model.add(Conv2D(filters=160, kernel_size=(3,3), strides=(1,1), padding='same'))
                                                                                                      model.add(BatchNormalization())
                                                                                                      model.add(Activation('relu'))
                                                                                                      model.add(MaxPooling2D(pool_size=(2,2), strides=(2,2), padding='same'))
                                                                                                      
                                                                                                      model.add(Flatten())
                                                                                                      
                                                                                                      # 1st Fully Connected Layer
                                                                                                      model.add(Dense(4096, input_shape=(32,32,3,)))
                                                                                                      model.add(BatchNormalization())
                                                                                                      model.add(Activation('relu'))
                                                                                                      # Add Dropout to prevent overfitting
                                                                                                      model.add(Dropout(0.4))
                                                                                                      
                                                                                                      #2nd Fully Connected Layer
                                                                                                      model.add(Dense(4096))
                                                                                                      model.add(BatchNormalization())
                                                                                                      model.add(Activation('relu'))
                                                                                                      #Add Dropout
                                                                                                      model.add(Dropout(0.4))
                                                                                                      
                                                                                                      #3rd Fully Connected Layer
                                                                                                      model.add(Dense(1000))
                                                                                                      model.add(BatchNormalization())
                                                                                                      model.add(Activation('relu'))
                                                                                                      #Add Dropout
                                                                                                      model.add(Dropout(0.4))
                                                                                                      
                                                                                                      #Output Layer
                                                                                                      model.add(Dense(10))
                                                                                                      model.add(BatchNormalization())
                                                                                                      model.add(Activation('softmax'))
                                                                                                      
                                                                                                      
                                                                                                      #compile model using accuracy to measure model performance
                                                                                                      opt = keras.optimizers.Adam(learning_rate = 0.0001)
                                                                                                      model.compile(optimizer=opt, loss='categorical_crossentropy', 
                                                                                                                    metrics=['accuracy'])
                                                                                                      
                                                                                                      
                                                                                                      #train the model
                                                                                                      model.fit(X_train, y_train, validation_data=(X_test, y_test), epochs=30)
                                                                                                      

From this point, after utilising the aforementioned datasets, I wanted to go one step further and use a dataset with more channels than greyscale or RGB images present, hence the inclusion of a hyperspectral dataset. When looking for a hyperspectral dataset I came across this one.

The issue at this stage was realising that this hyperspectral dataset is a single image, with each value in the ground truth relating to one pixel. I therefore reformatted the data into a collection of hyperspectral data/pixels.

Code reformatting the corrected dataset into x_train & x_test:

                                                                                                      import keras
                                                                                                      import scipy
                                                                                                      import numpy as np
                                                                                                      import matplotlib.pyplot as plt
                                                                                                      from keras.utils import to_categorical
                                                                                                      from scipy import io
                                                                                                      
                                                                                                      mydict = scipy.io.loadmat('Indian_pines_corrected.mat')
                                                                                                      dataset = np.array(mydict.get('indian_pines_corrected'))
                                                                                                      
                                                                                                      
                                                                                                      #This is creating the split between x_train and x_test from the original dataset 
                                                                                                      # x_train after this code runs will have a shape of (121, 145, 200) 
                                                                                                      # x_test after this code runs will have a shape of (24, 145, 200)
                                                                                                      x_train = np.zeros((121,145,200), dtype=np.int)
                                                                                                      x_test = np.zeros((24,145,200), dtype=np.int)    
                                                                                                      
                                                                                                      xtemp = np.array_split(dataset, [121])
                                                                                                      x_train = np.array(xtemp[0])
                                                                                                      x_test = np.array(xtemp[1])
                                                                                                      
                                                                                                      # x_train will have a shape of (17545, 200) 
                                                                                                      # x_test will have a shape of (3480, 200)
                                                                                                      x_train = x_train.reshape(-1, x_train.shape[-1])
                                                                                                      x_test = x_test.reshape(-1, x_test.shape[-1])
                                                                                                      

Code reformatting the ground truth dataset into Y_train & Y_test:

                                                                                                      truthDataset = scipy.io.loadmat('Indian_pines_gt.mat')
                                                                                                      gTruth = truthDataset.get('indian_pines_gt')
                                                                                                      
                                                                                                      #This is creating the split between Y_train and Y_test from the original dataset 
                                                                                                      # Y_train after this code runs will have a shape of (121, 145) 
                                                                                                      # Y_test after this code runs will have a shape of (24, 145)
                                                                                                      
                                                                                                      Y_train = np.zeros((121,145), dtype=np.int)
                                                                                                      Y_test = np.zeros((24,145), dtype=np.int)    
                                                                                                      
                                                                                                      ytemp = np.array_split(gTruth, [121])
                                                                                                      Y_train = np.array(ytemp[0])
                                                                                                      Y_test = np.array(ytemp[1])
                                                                                                      
                                                                                                      # Y_train will have a shape of (17545) 
                                                                                                      # Y_test will have a shape of (3480)
                                                                                                      Y_train = Y_train.reshape(-1)
                                                                                                      Y_test = Y_test.reshape(-1)
                                                                                                      
                                                                                                      
#17 categories ranging from 0-16

#Y_train one-hot encode target column (num_classes passed explicitly so the
#train and test label arrays both have 17 columns)
Y_train = to_categorical(Y_train, num_classes = 17)

#Y_test one-hot encode target column
Y_test = to_categorical(Y_test, num_classes = 17)
                                                                                                      

My thought process was that, despite the initial image being broken down into 1x1 patches, the large number of channels each patch possesses, with their respective values, would aid in categorisation of the dataset.

Essentially, I'd want to input this reformatted data into my model (seen within the first code fragment in this post); however, I'm uncertain whether I am taking the wrong approach due to my inexperience in this area. I was expecting to input a shape of (1,1,200), i.e. the shapes of x_train & x_test would be (17545,1,1,200) & (3480,1,1,200) respectively.

                                                                                                      ANSWER

                                                                                                      Answered 2021-Dec-16 at 10:18

If the hyperspectral dataset is given to you as a large image with many channels, I suppose that the classification of each pixel should depend on the pixels around it (otherwise I would not format the data as an image, i.e. without grid structure). Given this assumption, breaking up the input picture into 1x1 parts is not a good idea, as you are losing the grid structure.

I further suppose that the order of the channels is arbitrary, which implies that convolution over the channels is probably not meaningful (which, however, you did not plan to do anyway).

                                                                                                      Instead of reformatting the data the way you did, you may want to create a model that takes an image as input and also outputs an "image" containing the classifications for each pixel. I.e. if you have 10 classes and take a (145, 145, 200) image as input, your model would output a (145, 145, 10) image. In that architecture you would not have any fully-connected layers. Your output layer would also be a convolutional layer.
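
As a rough illustration of that fully-convolutional idea, here is a minimal Keras sketch; the layer widths, the 17-class output, and the use of 'same' padding are assumptions for illustration, not part of the original answer:

from keras.models import Sequential
from keras.layers import Conv2D

# Fully-convolutional model: takes a (145, 145, 200) hyperspectral cube and
# returns per-pixel class scores of shape (145, 145, 17). No Dense layers,
# so the spatial grid is preserved end to end.
fcn = Sequential([
    Conv2D(64, (3, 3), padding='same', activation='relu', input_shape=(145, 145, 200)),
    Conv2D(64, (3, 3), padding='same', activation='relu'),
    Conv2D(17, (1, 1), padding='same', activation='softmax'),  # per-pixel softmax over 17 classes
])
fcn.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
fcn.summary()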

That, however, means you will not be able to keep your current architecture, because the tasks for MNIST/CIFAR10 and your hyperspectral dataset are not the same: for MNIST/CIFAR10 you want to classify an image in its entirety, while for the other dataset you want to assign a class to each pixel (most likely also using the pixels around each pixel).

                                                                                                      Some further ideas:

• If you want to turn the pixel classification task on the hyperspectral dataset into a classification task for an entire image, maybe you can reformulate that task as "classifying a hyperspectral image as the class of its center (or top-left, or bottom-right, or (21st, 104th), or whatever) pixel". To obtain the data from your single hyperspectral image, for each pixel I would shift the image such that the target pixel is at the desired location (e.g. the center). All pixels that "fall off" the border could be inserted at the other side of the image.
• If you want to stick with a pixel classification task but need more data, maybe split up the single hyperspectral image you have into many smaller images (e.g. 10x10x200). You may even want to use images of many different sizes. If your model only has convolution and pooling layers and you make sure to maintain the size of the image, that should work out (a minimal patch-extraction sketch follows below).
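
A minimal sketch of that patch-splitting idea, assuming non-overlapping 10x10 patches and a hypothetical extract_patches helper (neither the helper nor the patch size comes from the original answer):

import numpy as np

# Hypothetical helper: cut a (H, W, C) hyperspectral cube and its label map into
# non-overlapping patch_size x patch_size sub-images, dropping edge leftovers.
def extract_patches(cube, labels, patch_size=10):
    h, w, _ = cube.shape
    patches, patch_labels = [], []
    for i in range(0, h - patch_size + 1, patch_size):
        for j in range(0, w - patch_size + 1, patch_size):
            patches.append(cube[i:i + patch_size, j:j + patch_size, :])
            patch_labels.append(labels[i:i + patch_size, j:j + patch_size])
    return np.array(patches), np.array(patch_labels)

# Example with random data shaped like Indian Pines (145 x 145 x 200):
cube = np.random.rand(145, 145, 200)
labels = np.random.randint(0, 17, size=(145, 145))
X, Y = extract_patches(cube, labels)   # X: (196, 10, 10, 200), Y: (196, 10, 10)
print(X.shape, Y.shape)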

                                                                                                      Source https://stackoverflow.com/questions/70226626

Community Discussions and Code Snippets contain sources that include the Stack Exchange Network.

                                                                                                      Vulnerabilities

                                                                                                      No vulnerabilities reported

                                                                                                      Install scipy

                                                                                                      You can install using 'pip install scipy' or download it from GitHub, PyPI.
                                                                                                      You can use scipy like any standard Python library. You will need to make sure that you have a development environment consisting of a Python distribution including header files, a compiler, pip, and git installed. Make sure that your pip, setuptools, and wheel are up to date. When using pip it is generally recommended to install packages in a virtual environment to avoid changes to the system.

                                                                                                      Support

For any new features, suggestions, and bugs, create an issue on GitHub. If you have any questions, ask them on the community page on Stack Overflow.
Install
• PyPI

  pip install scipy

• Clone via HTTPS

  https://github.com/scipy/scipy.git

• Clone via GitHub CLI

  gh repo clone scipy/scipy

• Clone via SSH

  git@github.com:scipy/scipy.git


Consider Popular Python Libraries
• public-apis by public-apis
• system-design-primer by donnemartin
• Python by TheAlgorithms
• Python-100-Days by jackfrued
• youtube-dl by ytdl-org

Try Top Libraries by scipy
• scipy-cookbook by scipy (Jupyter Notebook)
• scipy.org by scipy (HTML)
• weave by scipy (C++)
• SciPyCentral by scipy (JavaScript)

Compare Python Libraries with Highest Support
• core by home-assistant
• youtube-dl by ytdl-org
• scikit-learn by scikit-learn
• models by tensorflow
• fastapi by tiangolo
