
intervaltree | Interval Tree implementation in Java | Dataset library

 by   kevinjdolan Java Version: Current License: No License

kandi X-RAY | intervaltree Summary

intervaltree is a Java library typically used in Artificial Intelligence and Dataset applications. intervaltree has no known bugs and no reported vulnerabilities, and it has low support. However, its build file is not available. You can download it from GitHub.
[unmaintained] Interval Tree implementation in Java

kandi-support Support

  • intervaltree has a low active ecosystem.
  • It has 16 star(s) with 17 fork(s). There are 2 watchers for this library.
  • It had no major release in the last 12 months.
  • There is 1 open issue and 0 closed issues. On average, issues are closed in 1369 days. There are no pull requests.
  • It has a neutral sentiment in the developer community.
  • The latest version of intervaltree is current.

kandi Quality

  • intervaltree has 0 bugs and 0 code smells.

Security

  • intervaltree has no vulnerabilities reported, and its dependent libraries have no vulnerabilities reported.
  • intervaltree code analysis shows 0 unresolved vulnerabilities.
  • There are 0 security hotspots that need review.

License

  • intervaltree does not have a standard license declared.
  • Check the repository for any license declaration and review the terms closely.
  • Without a license, all rights are reserved, and you cannot use the library in your applications.

Reuse

  • intervaltree releases are not available. You will need to build from source code and install.
  • intervaltree has no build file. You will need to create the build yourself to build the component from source.
  • Installation instructions are not available. Examples and code snippets are available.
Top functions reviewed by kandi - BETA

kandi has reviewed intervaltree and identified the functions below as its top functions. This is intended to give you an instant insight into the functionality intervaltree implements, and to help you decide if it suits your requirements.

  • Return all intervals associated with the given time.
  • Query the intervals for the given interval.
  • Find intervals for a given time.
  • Return a string representation of the center.
  • Compare this interval to another interval.
  • Get the median value.
  • Set the end time.
  • Check if this interval intersects another.
  • Check if the given time interval contains the given time.
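The functions above are standard interval-tree primitives. As a rough illustrative sketch (hypothetical Python, not the library's actual Java code), point containment, interval intersection, and a stabbing query can be expressed as follows; note the real library may treat endpoints differently than the half-open convention assumed here:

```python
# Hypothetical sketch of the interval primitives listed above
# (contains, intersects, query-by-time); not the library's code.

class Interval:
    def __init__(self, start, end, data):
        self.start, self.end, self.data = start, end, data

    def contains(self, time):
        # half-open [start, end): a point query hits an interval
        # when start <= time < end (an assumption of this sketch)
        return self.start <= time < self.end

    def intersects(self, other):
        # two intervals overlap unless one ends before the other begins
        return self.start < other.end and other.start < self.end

def stab(intervals, time):
    """Return the data of every interval containing `time`."""
    return [iv.data for iv in intervals if iv.contains(time)]

ivs = [Interval(0, 10, 1), Interval(20, 30, 2),
       Interval(15, 17, 3), Interval(25, 35, 4)]
print(stab(ivs, 5))   # only interval 1 contains 5
print(stab(ivs, 29))  # intervals 2 and 4 both contain 29
```

A real interval tree avoids the linear scan in `stab` by recursively partitioning intervals around center points, giving O(log n + k) queries.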

                    Get all kandi verified functions for this library.

                    intervaltree Key Features

                    [unmaintained] Interval Tree implementation in Java

                    Original release notes from 2010

                    IntervalTree<Integer> it = new IntervalTree<Integer>();

                    it.addInterval(0L,10L,1);
                    it.addInterval(20L,30L,2);
                    it.addInterval(15L,17L,3);
                    it.addInterval(25L,35L,4);

                    List<Integer> result1 = it.get(5L);
                    List<Integer> result2 = it.get(10L);
                    List<Integer> result3 = it.get(29L);
                    List<Integer> result4 = it.get(5L,15L);

                    System.out.println("Intervals that contain 5L:");
                    for(int r : result1)
                        System.out.println(r);

                    System.out.println("Intervals that contain 10L:");
                    for(int r : result2)
                        System.out.println(r);

                    System.out.println("Intervals that contain 29L:");
                    for(int r : result3)
                        System.out.println(r);

                    System.out.println("Intervals that intersect (5L,15L):");
                    for(int r : result4)
                        System.out.println(r);
                    No such file or directory: '/opt/anaconda3/lib/python3.8/site-packages/rtree/lib'

                    python is /opt/anaconda3/bin/python
                    python is /usr/local/bin/python
                    python is /usr/bin/python
                    
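The `type -a python` output above reveals three interpreters on the PATH; a missing `rtree/lib` directory typically means the package was installed into a different interpreter's site-packages than the one actually running. A quick generic diagnostic (not specific to this question's setup) is to print the running interpreter and its package search path:

```python
# Show which Python binary is executing and where it searches for
# packages, so you can confirm it matches the environment that
# pip/conda installed rtree into.
import sys

print(sys.executable)   # e.g. /opt/anaconda3/bin/python
print(sys.prefix)       # root of the active environment
for p in sys.path:      # directories scanned for importable packages
    print(p)
```

If `sys.executable` is not the interpreter you installed into, activate the right environment (or call that interpreter explicitly) rather than reinstalling the package.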

                    UnsatisfiableError on importing environment pywin32==300 (Requested package -> Available versions)

                    name: restoredEnv
                    channels:
                      - anaconda
                      - conda-forge
                      - defaults
                    dependencies:
                      - _anaconda_depends=2020.07
                      - _ipyw_jlab_nb_ext_conf=0.1.0
                      - _tflow_select=2.3.0=eigen
                      - aiohttp=3.7.4
                      - alabaster=0.7.12
                      - anaconda=custom
                      - anaconda-client=1.7.2
                      - anaconda-navigator=2.0.3
                      - anaconda-project=0.9.1
                      - anyio=2.2.0
                      - appdirs=1.4.4
                      - argh=0.26.2
                      - argon2-cffi=20.1.0
                      - arrow=1.0.3
                      - asn1crypto=1.4.0
                      - astor=0.8.1
                      - astroid=2.5.2
                      - astropy=4.2.1
                      - astunparse=1.6.3
                      - async-timeout=3.0.1
                      - async_generator=1.10
                      - atomicwrites=1.4.0
                      - attrs=20.3.0
                      - autopep8=1.5.6
                      - babel=2.9.0
                      - backcall=0.2.0
                      - backports=1.0
                      - backports.functools_lru_cache=1.6.3
                      - backports.shutil_get_terminal_size=1.0.0
                      - backports.tempfile=1.0
                      - backports.weakref=1.0.post1
                      - bcrypt=3.2.0
                      - beautifulsoup4=4.9.3
                      - binaryornot=0.4.4
                      - bitarray=1.9.1
                      - bkcharts=0.2
                      - black=20.8b1
                      - blas=1.0=mkl
                      - bleach=3.3.0
                      - blinker=1.4
                      - blosc=1.21.0
                      - bokeh=2.3.0
                      - boto=2.49.0
                      - bottleneck=1.3.2
                      - brotli=1.0.9
                      - brotlipy=0.7.0
                      - bzip2=1.0.8
                      - ca-certificates=2020.10.14=0
                      - cached-property=1.5.2
                      - cached_property=1.5.2
                      - certifi=2020.6.20
                      - cffi=1.14.5
                      - chardet=4.0.0
                      - charls=2.2.0
                      - click=7.1.2
                      - cloudpickle=1.6.0
                      - clyent=1.2.2
                      - colorama=0.4.4
                      - comtypes=1.1.9
                      - conda=4.10.1
                      - conda-build=3.18.11
                      - conda-content-trust=0.1.1
                      - conda-env=2.6.0=1
                      - conda-package-handling=1.7.2
                      - conda-repo-cli=1.0.4
                      - conda-token=0.3.0
                      - conda-verify=3.4.2
                      - console_shortcut=0.1.1=4
                      - contextlib2=0.6.0.post1
                      - cookiecutter=1.7.2
                      - cryptography=3.4.7
                      - curl=7.76.0
                      - cycler=0.10.0
                      - cython=0.29.22
                      - cytoolz=0.11.0
                      - dask=2021.4.0
                      - dask-core=2021.4.0
                      - dataclasses=0.8
                      - decorator=4.4.2
                      - defusedxml=0.7.1
                      - diff-match-patch=20200713
                      - distributed=2021.4.0
                      - docutils=0.17
                      - entrypoints=0.3
                      - et_xmlfile=1.0.1
                      - fastcache=1.1.0
                      - filelock=3.0.12
                      - flake8=3.9.0
                      - flask=1.1.2
                      - freetype=2.10.4
                      - fsspec=0.9.0
                      - future=0.18.2
                      - get_terminal_size=1.0.0
                      - gevent=21.1.2
                      - giflib=5.2.1
                      - glew=2.1.0
                      - glob2=0.7
                      - gmpy2=2.1.0b1
                      - google-pasta=0.2.0
                      - greenlet=1.0.0
                      - h5py=2.10.0
                      - hdf5=1.10.6
                      - heapdict=1.0.1
                      - html5lib=1.1
                      - icc_rt=2019.0.0
                      - icu=68.1
                      - idna=2.10
                      - imagecodecs=2021.3.31
                      - imageio=2.9.0
                      - imagesize=1.2.0
                      - importlib-metadata=3.10.0
                      - importlib_metadata=3.10.0
                      - inflection=0.5.1
                      - iniconfig=1.1.1
                      - intel-openmp=2021.2.0
                      - intervaltree=3.0.2
                      - ipykernel=5.5.3
                      - ipython=7.22.0
                      - ipython_genutils=0.2.0
                      - ipywidgets=7.6.3
                      - isort=5.8.0
                      - itsdangerous=1.1.0
                      - jdcal=1.4.1
                      - jedi=0.17.2
                      - jinja2=2.11.3
                      - jinja2-time=0.2.0
                      - joblib=1.0.1
                      - jpeg=9d
                      - json5=0.9.5
                      - jsonschema=3.2.0
                      - jupyter=1.0.0
                      - jupyter-packaging=0.7.12
                      - jupyter_client=6.1.12
                      - jupyter_console=6.4.0
                      - jupyter_core=4.7.1
                      - jupyter_server=1.5.1
                      - jupyterlab=3.0.12
                      - jupyterlab_pygments=0.1.2
                      - jupyterlab_server=2.4.0
                      - jupyterlab_widgets=1.0.0
                      - jxrlib=1.1
                      - keras-applications=1.0.8
                      - keras-preprocessing=1.1.2
                      - keyring=23.0.1
                      - kivy=2.0.0
                      - kiwisolver=1.3.1
                      - krb5=1.17.2
                      - lazy-object-proxy=1.6.0
                      - lcms2=2.12
                      - lerc=2.2.1
                      - libaec=1.0.4
                      - libarchive=3.5.1
                      - libblas=3.9.0=8_mkl
                      - libcblas=3.9.0=8_mkl
                      - libclang=11.1.0
                      - libcurl=7.76.0
                      - libdeflate=1.7
                      - libiconv=1.16
                      - liblapack=3.9.0=8_mkl
                      - liblief=0.10.1
                      - libllvm9=9.0.1
                      - libpng=1.6.37
                      - libprotobuf=3.16.0
                      - libsodium=1.0.18
                      - libspatialindex=1.9.3
                      - libssh2=1.9.0
                      - libtiff=4.2.0
                      - libuv=1.39.0
                      - libwebp-base=1.2.0
                      - libxml2=2.9.10
                      - libxslt=1.1.33
                      - libzopfli=1.0.3
                      - llvmlite=0.36.0
                      - locket=0.2.0
                      - lxml=4.6.3
                      - lz4-c=1.9.3
                      - lzo=2.10
                      - m2w64-gcc-libgfortran=5.3.0=6
                      - m2w64-gcc-libs=5.3.0=7
                      - m2w64-gcc-libs-core=5.3.0=7
                      - m2w64-gmp=6.1.0=2
                      - m2w64-libwinpthread-git=5.0.0.4634.697f757=2
                      - markdown=3.3.4
                      - markupsafe=1.1.1
                      - matplotlib=3.4.1
                      - matplotlib-base=3.4.1
                      - mccabe=0.6.1
                      - menuinst=1.4.16
                      - mistune=0.8.4
                      - mkl=2020.4
                      - mkl-service=2.3.0
                      - mkl_fft=1.3.0
                      - mkl_random=1.2.0
                      - mock=4.0.3
                      - more-itertools=8.7.0
                      - mpc=1.1.0
                      - mpfr=4.0.2
                      - mpir=3.0.0
                      - mpmath=1.2.1
                      - msgpack-python=1.0.2
                      - msys2-conda-epoch=20160418=1
                      - multidict=5.1.0
                      - multipledispatch=0.6.0
                      - mypy_extensions=0.4.3
                      - navigator-updater=0.2.1
                      - nbclassic=0.2.6
                      - nbclient=0.5.3
                      - nbconvert=6.0.7
                      - nbformat=5.1.3
                      - nest-asyncio=1.5.1
                      - networkx=2.5.1
                      - nltk=3.6
                      - nose=1.3.7
                      - notebook=6.3.0
                      - numba=0.53.1
                      - numexpr=2.7.3
                      - numpy=1.20.2
                      - numpy-base=1.18.5
                      - numpydoc=1.1.0
                      - olefile=0.46
                      - openjpeg=2.4.0
                      - openpyxl=3.0.7
                      - openssl=1.1.1k
                      - opt_einsum=3.3.0
                      - packaging=20.9
                      - pandas=1.2.3
                      - pandoc=2.13
                      - pandocfilters=1.4.2
                      - paramiko=2.7.2
                      - parso=0.7.0
                      - partd=1.1.0
                      - path=15.1.2
                      - path.py=12.5.0=0
                      - pathlib2=2.3.5
                      - pathspec=0.8.1
                      - pathtools=0.1.2
                      - patsy=0.5.1
                      - pep8=1.7.1
                      - pexpect=4.8.0
                      - pickleshare=0.7.5
                      - pillow=8.1.2
                      - pip=21.0.1
                      - pkginfo=1.7.0
                      - pluggy=0.13.1
                      - ply=3.11
                      - pooch=1.3.0
                      - powershell_shortcut=0.0.1=3
                      - poyo=0.5.0
                      - prometheus_client=0.10.0
                      - prompt-toolkit=3.0.18
                      - prompt_toolkit=3.0.18
                      - psutil=5.8.0
                      - ptyprocess=0.7.0
                      - py=1.10.0
                      - py-lief=0.10.1
                      - pyasn1=0.4.8
                      - pycodestyle=2.6.0
                      - pycosat=0.6.3
                      - pycparser=2.20
                      - pycurl=7.43.0.6
                      - pydocstyle=6.0.0
                      - pyerfa=1.7.2
                      - pyfirmata=1.1.0
                      - pyflakes=2.2.0
                      - pygments=2.8.1
                      - pyjwt=2.1.0
                      - pylint=2.7.2
                      - pyls-black=0.4.6
                      - pyls-spyder=0.3.2
                      - pynacl=1.4.0
                      - pyodbc=4.0.30
                      - pyopenssl=20.0.1
                      - pyparsing=2.4.7
                      - pyqt=5.12.3
                      - pyqt-impl=5.12.3
                      - pyqt5-sip=4.19.18
                      - pyqtchart=5.12
                      - pyqtwebengine=5.12.1
                      - pyreadline=2.1
                      - pyrsistent=0.17.3
                      - pyserial=3.4
                      - pysocks=1.7.1
                      - pytables=3.6.1
                      - pytest=6.2.3
                      - python=3.8.3
                      - python-dateutil=2.8.1
                      - python-jsonrpc-server=0.4.0
                      - python-language-server=0.36.2
                      - python-libarchive-c=2.9
                      - python-slugify=4.0.1
                      - python_abi=3.8=1_cp38
                      - pytz=2021.1
                      - pywavelets=1.1.1
                      - pywin32=300
                      - pywin32-ctypes=0.2.0
                      - pywinpty=0.5.7
                      - pyyaml=5.4.1
                      - pyzmq=22.0.3
                      - qdarkstyle=3.0.2
                      - qstylizer=0.1.10
                      - qt=5.12.9
                      - qtawesome=1.0.2
                      - qtconsole=5.0.3
                      - qtpy=1.9.0
                      - regex=2021.4.4
                      - requests=2.25.1
                      - requests-oauthlib=1.3.0
                      - rope=0.18.0
                      - rsa=4.7.2
                      - rtree=0.9.7
                      - ruamel_yaml=0.15.80
                      - scikit-image=0.18.1
                      - scipy=1.6.2
                      - sdl2=2.0.12
                      - sdl2_image=2.0.5
                      - sdl2_mixer=2.0.4
                      - sdl2_ttf=2.0.15
                      - seaborn=0.11.1
                      - seaborn-base=0.11.1
                      - send2trash=1.5.0
                      - setuptools=49.6.0
                      - simplegeneric=0.8.1
                      - singledispatch=3.6.1
                      - sip=4.19.25
                      - six=1.15.0
                      - smpeg2=2.0.0
                      - snappy=1.1.8
                      - sniffio=1.2.0
                      - snowballstemmer=2.1.0
                      - sortedcollections=2.1.0
                      - sortedcontainers=2.3.0
                      - soupsieve=2.0.1
                      - sphinx=3.5.3
                      - sphinxcontrib=1.0
                      - sphinxcontrib-applehelp=1.0.2
                      - sphinxcontrib-devhelp=1.0.2
                      - sphinxcontrib-htmlhelp=1.0.3
                      - sphinxcontrib-jsmath=1.0.1
                      - sphinxcontrib-qthelp=1.0.3
                      - sphinxcontrib-serializinghtml=1.1.4
                      - sphinxcontrib-websupport=1.2.4
                      - spyder=5.0.0
                      - spyder-kernels=2.0.1
                      - sqlalchemy=1.4.6
                      - sqlite=3.35.4
                      - statsmodels=0.12.2
                      - sympy=1.7.1
                      - tbb=2020.2
                      - tblib=1.7.0
                      - tensorboard=2.4.1
                      - tensorboard-plugin-wit=1.8.0
                      - tensorflow-base=2.3.0
                      - tensorflow-estimator=2.4.0
                      - terminado=0.9.4
                      - testpath=0.4.4
                      - text-unidecode=1.3
                      - textdistance=4.2.1
                      - threadpoolctl=2.1.0
                      - three-merge=0.1.1
                      - tifffile=2021.3.31
                      - tinycss=0.4
                      - tk=8.6.10
                      - toml=0.10.2
                      - toolz=0.11.1
                      - tornado=6.1
                      - tqdm=4.60.0
                      - traitlets=5.0.5
                      - typed-ast=1.4.2
                      - typing-extensions=3.7.4.3=0
                      - typing_extensions=3.7.4.3
                      - ujson=4.0.2
                      - unicodecsv=0.14.1
                      - unidecode=1.2.0
                      - urllib3=1.26.4
                      - vc=14.2
                      - vs2015_runtime=14.28.29325
                      - watchdog=1.0.2
                      - wcwidth=0.2.5
                      - webencodings=0.5.1
                      - werkzeug=1.0.1
                      - wheel=0.36.2
                      - whichcraft=0.6.1
                      - widgetsnbextension=3.5.1
                      - win_inet_pton=1.1.0
                      - win_unicode_console=0.5
                      - wincertstore=0.2
                      - winpty=0.4.3=4
                      - wrapt=1.12.1
                      - xlrd=2.0.1
                      - xlsxwriter=1.3.8
                      - xlwings=0.23.0
                      - xlwt=1.3.0
                      - xmltodict=0.12.0
                      - xz=5.2.5
                      - yaml=0.2.5
                      - yapf=0.30.0
                      - yarl=1.6.3
                      - zeromq=4.3.4
                      - zfp=0.5.5
                      - zict=2.0.0
                      - zipp=3.4.1
                      - zlib=1.2.11
                      - zope=1.0
                      - zope.event=4.5.0
                      - zope.interface=5.3.0
                      - zstd=1.4.9
                      - pip:
                        - absl-py==0.11.0
                        - bs4==0.0.1
                        - cachetools==4.2.1
                        - cssselect==1.1.0
                        - fake-useragent==0.1.11
                        - feedparser==6.0.2
                        - flatbuffers==1.12
                        - gast==0.3.3
                        - google-auth==1.27.1
                        - google-auth-oauthlib==0.4.3
                        - grpcio==1.32.0
                        - oauthlib==3.1.0
                        - opencv-python==4.5.1.48
                        - parse==1.19.0
                        - protobuf==3.15.5
                        - pyarduino==0.2.2
                        - pyasn1-modules==0.2.8
                        - pyee==8.1.0
                        - pymysql==0.10.1
                        - pyppeteer==0.2.5
                        - pyquery==1.4.3
                        - requests-html==0.10.0
                        - scikit-learn==0.22.2.post1
                        - sgmllib3k==1.0.0
                        - tensorflow==2.4.1
                        - termcolor==1.1.0
                        - w3lib==1.22.0
                        - websockets==8.1
                        - yahoo-fin==0.8.8
                    

                    Evaluating list similarities

                    import pandas as pd
                    
                    df = pd.DataFrame(
                        {
                            "user_id": {0: "u1", 1: "u2", 2: "u3", 3: "u4", 4: "u5"},
                            "actual": {
                                0: ["a", "b", "c"],
                                1: ["a", "b", "d"],
                                2: ["c", "e", "f"],
                                3: ["c", "e", "f"],
                                4: ["b", "e", "f"],
                            },
                            "predicted": {
                                0: ["a", "b", "d"],
                                1: ["a", "b", "c"],
                                2: ["a", "c", "e"],
                                3: ["a", "e", "f"],
                                4: ["a", "b", "e"],
                            },
                            "popular": {
                                0: ["c", "e", "f"],
                                1: ["c", "e", "f"],
                                2: ["c", "e", "f"],
                                3: ["c", "e", "f"],
                                4: ["c", "e", "f"],
                            },
                            "random": {
                                0: ["d", "e", "f"],
                                1: ["a", "b", "c"],
                                2: ["a", "c", "f"],
                                3: ["a", "d", "f"],
                                4: ["a", "c", "e"],
                            },
                        }
                    )
                    
                    # Convert lists into sets
                    df = df.applymap(lambda x: set(x) if isinstance(x, list) else x)
                    
                    # Iterate to create new columns with percentages
                    for i in range(df.shape[0]):
                        for col in ["predicted", "popular", "random"]:
                            df.loc[i, f"{col}_pct"] = (
                                len(df.loc[i, "actual"] & df.loc[i, col]) / len(df.loc[i, "actual"]) * 100
                            )
                    
                    # Cleanup
                    df = df[["user_id", "predicted_pct", "popular_pct", "random_pct"]]
                    
                    print(df)
                    # Outputs
                      user_id  predicted_pct  popular_pct  random_pct
                    0      u1      66.666667    33.333333    0.000000
                    1      u2      66.666667     0.000000   66.666667
                    2      u3      66.666667   100.000000   66.666667
                    3      u4      66.666667   100.000000   33.333333
                    4      u5      66.666667    66.666667   33.333333
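The row-by-row loop above works, but the same percentages can be computed with one `DataFrame.apply` per column (a sketch of an alternative, not the answer's code; it assumes the list columns have already been converted to sets as above, and reproduces the u1/u2 rows of the output):

```python
import pandas as pd

# Two rows of the example, with list columns already converted to sets
df = pd.DataFrame({
    "actual":    [{"a", "b", "c"}, {"a", "b", "d"}],
    "predicted": [{"a", "b", "d"}, {"a", "b", "c"}],
    "popular":   [{"c", "e", "f"}, {"c", "e", "f"}],
})

def overlap_pct(row, col):
    # share of the "actual" set recovered by the candidate column
    return len(row["actual"] & row[col]) / len(row["actual"]) * 100

# Extra keyword arguments to DataFrame.apply are forwarded to the function
for col in ["predicted", "popular"]:
    df[f"{col}_pct"] = df.apply(overlap_pct, axis=1, col=col)

print(df[["predicted_pct", "popular_pct"]])
```

This matches the first two rows of the table above: predicted recovers 2 of 3 actual items for both users (66.67%), while the popular list recovers 1 of 3 for u1 (33.33%) and 0 for u2.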
                    

                    Remove rows from dataframe if it has partial match with other rows for specific columns

                    df = pd.DataFrame({'id': [1, 1, 1], 'start': [11, 20, 16], 'end': [18, 35, 17]})
                    
                    # First we construct a range of numbers from the start and end index
                    df.loc[:, 'range'] = df.apply(lambda x: list(range(x['start'], x['end'])), axis=1)
                    
                    # Next, we "cumulate" these ranges and measure the number of unique elements in the cumulative range at each row 
                    df['range_size'] = df['range'].cumsum().apply(lambda x: len(set(x)))
                    
                    # Finally we check if every row adds anything to the cumulative range - if a new row adds nothing, then we can drop that row
                    df['range_size_shifted'] = df['range'].cumsum().apply(lambda x: len(set(x))).shift(1)
                    df['drop'] = df.apply(lambda x: False if pd.isna(x['range_size_shifted']) else not int(x['range_size'] - x['range_size_shifted']), axis=1)
                    
                    print(df)
                    #   id  start  end   drop
                    #0   1     11   18  False
                    #1   1     20   35  False
                    #2   1     16   17   True
                    
                    for key, group in df.groupby('id'):
                        group.loc[:, 'range'] = group.apply(lambda x: list(range(x['start'], x['end'])), axis=1)
                        group['range_size'] = group['range'].cumsum().apply(lambda x: len(set(x)))
                        group['range_size_shifted'] = group['range'].cumsum().apply(lambda x: len(set(x))).shift(1)
                        group['drop'] = group.apply(lambda x: False if pd.isna(x['range_size_shifted']) else not int(x['range_size'] - x['range_size_shifted']), axis=1)
                        print(group)
                    
                    from intervaltree import Interval, IntervalTree

                    def drop_subspan_duplicates(df):
                        # Build an IntervalArray covering every row's [start, end] span
                        idx1 = pd.arrays.IntervalArray.from_arrays(
                            df['start'],
                            df['end'],
                            closed='both')
                        # For each row, record the first row whose span overlaps it,
                        # then keep only the first row of each overlap group
                        df['wrd_id'] = df.apply(lambda x: df.index[idx1.overlaps(pd.Interval(x['start'], x['end'], closed='both'))][0], axis=1)
                        df = df.drop_duplicates(['wrd_id'], keep='first')
                        df.drop(['wrd_id'], axis=1, inplace=True)
                        return df

                    output = data.groupby('id').apply(drop_subspan_duplicates)
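The snippet above imports intervaltree but actually relies on pandas' IntervalArray. The underlying rule, drop a row whose span is already fully covered by earlier rows, can also be sketched in plain Python with no dependencies (hypothetical helper name `drop_covered_spans`):

```python
def drop_covered_spans(rows):
    """Keep a (start, end) row only if its half-open range adds at least
    one position beyond what earlier rows already cover."""
    seen = set()   # positions covered so far
    kept = []
    for start, end in rows:
        span = set(range(start, end))
        if not span <= seen:   # row contributes something new
            kept.append((start, end))
            seen |= span
    return kept

# (16, 17) lies entirely inside (11, 18), so it is dropped
print(drop_covered_spans([(11, 18), (20, 35), (16, 17)]))  # [(11, 18), (20, 35)]
```

This is O(total range length), so it suits small integer spans like the example; for large or float ranges the IntervalArray/intervaltree approaches above scale better.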
                    

                    Multipoint(df['geometry']) key error from dataframe but key exist. KeyError: 13 geopandas

                    # https://www.kaggle.com/new-york-state/nys-nyc-transit-subway-entrance-and-exit-data
                    import kaggle.cli
                    import sys, requests, urllib
                    import pandas as pd
                    from pathlib import Path
                    from zipfile import ZipFile
                    
                    # fmt: off
                    # download data set
                    url = "https://www.kaggle.com/new-york-state/nys-nyc-transit-subway-entrance-and-exit-data"
                    sys.argv = [sys.argv[0]] + f"datasets download {urllib.parse.urlparse(url).path[1:]}".split(" ")
                    kaggle.cli.main()
                    zfile = ZipFile(f'{urllib.parse.urlparse(url).path.split("/")[-1]}.zip')
                    dfs = {f.filename: pd.read_csv(zfile.open(f)) for f in zfile.infolist() if Path(f.filename).suffix in [".csv"]}
                    # fmt: on
                    
                    df_subway = dfs['nyc-transit-subway-entrance-and-exit-data.csv']
                    
                    from shapely.geometry import Point, MultiPoint
                    from shapely.ops import nearest_points
                    import geopandas as gpd
                    
                    geometry = [Point(xy) for xy in zip(df_subway['Station Longitude'], df_subway['Station Latitude'])]
                    
                    # Coordinate reference system (the {'init': 'EPSG:4326'} dict form is
                    # deprecated in modern pyproj/geopandas; pass the authority string directly)
                    crs = 'EPSG:4326'
                    
                    # Creating a Geographic data frame 
                    gdf_subway_entrance_geometry = gpd.GeoDataFrame(df_subway, crs=crs, geometry=geometry).to_crs('EPSG:5234')
                    gdf_subway_entrance_geometry
                    
                    df_yes_entry = gdf_subway_entrance_geometry
                    df_yes_entry = gdf_subway_entrance_geometry[gdf_subway_entrance_geometry.Entry=='YES']
                    df_yes_entry
                    
                    # randomly select a point....
                    gpdPoint = gdf_subway_entrance_geometry.sample(1).geometry.tolist()[0]
                    pts = MultiPoint(df_yes_entry['geometry'].values) # does not work with a geopandas series, works with a numpy array
                    pt = Point(gpdPoint.x, gpdPoint.y)
                    #[o.wkt for o in nearest_points(pt, pts)]
                    for o in nearest_points(pt, pts):
                      print(o)
                    

                    SwiftUI isn't loading data

                    public struct Event: Codable, Hashable, Identifiable {
                        private enum CodingKeys: CodingKey {
                            case title
                            case start
                            case end
                        }
                    
                        public var id = UUID()
                        var title: String
                        var start: String
                        var end: String
                    }
                    

                    Error in pandas: "Buffer has wrong number of dimensions (expected 1, got 2)"

                    >>> df.columns[df.eq(1).sum().ge(5)]
                    Index(['1'], dtype='object')
                    
                    df.loc[:, df.eq(1).sum().ge(5)]
                    
                       1
                    0  0
                    1  1
                    2  1
                    3  1
                    4  1
                    5  1
                    
                    (df.eq(1) # values equal to 1 -> True
                       .sum() # count number of True
                       .ge(5) # True if sum ≥ 5
                    )
                    
                    
                    def get_sum(data, list_of_items):
                        # I coded this return line, which worked according to one of the cells of the .ipynb file
                        return data.loc[:, list_of_items].all(axis = 'columns').sum()
                    
                    
                    def get_list(data):
                        
                        product_list = []
                    
                        for item in df.columns:
                            # I coded these two lines, which I am unable to test due to the error
                            if get_sum(data, [item]) >= 5:
                                product_list.append(item)
                        
                        return product_list
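The boolean-mask idiom above can be checked on a tiny frame; a minimal sketch (toy data, not the asker's dataset):

```python
import pandas as pd

df = pd.DataFrame({
    "A": [1, 1, 1, 1, 1, 0],   # five 1s -> kept
    "B": [1, 1, 0, 0, 0, 0],   # two 1s  -> dropped
})

# Per-column count of values equal to 1, then a boolean mask for count >= 5
mask = df.eq(1).sum().ge(5)
kept = df.loc[:, mask]

print(list(kept.columns))  # ['A']
```

Because `mask` is a Series indexed by column name, `df.loc[:, mask]` selects whole columns at once, which is what avoids the "Buffer has wrong number of dimensions" pitfall of mixing row-wise and column-wise selections.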
                    

                    AttributeError: could not import keras and segmentation models

                    import tensorflow as tf
                    from tensorflow import keras 
                    tf.compat.v1.enable_eager_execution()
                    import segmentation_models as sm
                    import glob
                    import cv2
                    import os
                    import numpy as np
                    from matplotlib import pyplot as plt
                    #import keras 
                    from tensorflow.keras.utils import normalize
                    from tensorflow.keras.metrics import MeanIoU
                    

                    Updating packages in conda

                    pip install torch-cluster --upgrade
                    

                    Jupyter Notebook Cannot Connect to Kernel, Likely due to Zipline / AssertionError

                    # Create environment
                    conda create -n zipline_env python=3.6 ipykernel
                    
                    # Activate environment, make sure you can see it in jupyter notebooks
                    conda activate zipline_env
                    python -m ipykernel install --user --name=zipline_env
                    
                    # Install Zipline
                    conda install -c conda-forge zipline
                    

                    Community Discussions

                    Trending Discussions on intervaltree
                    • No such file or directory: '/opt/anaconda3/lib/python3.8/site-packages/rtree/lib'
                    • PyObjc error while trying to deploy flask app on Heroku
                    • UnsatisfiableError on importing environment pywin32==300 (Requested package -> Available versions)
                    • Evaluating list similarities
                    • dictionary like data structure with ordered keys and selection between interval of key values
                    • Remove rows from dataframe if it has partial match with other rows for specific columns
                    • Multipoint(df['geometry']) key error from dataframe but key exist. KeyError: 13 geopandas
                    • SwiftUI isn't loading data
                    • Error in pandas: "Buffer has wrong number of dimensions (expected 1, got 2)"
                    • AttributeError: could not import keras and segmentation models

                    QUESTION

                    No such file or directory: '/opt/anaconda3/lib/python3.8/site-packages/rtree/lib'

                    Asked 2022-Mar-13 at 16:13

                    I am trying to build an app from a python file (Mac OS) using the py2app extension. I have a folder with the python file and the "setup.py" file.

                    • I first tested the app by running python setup.py py2app -A in the terminal; the dist and build folders are created successfully and the app works when launched.
                    • When I try to build it non-locally by running python setup.py py2app, there are various "WARNING: ImportERROR" messages during the build, ending with an error: [Errno 2] No such file or directory: '/opt/anaconda3/lib/python3.8/site-packages/rtree/lib'.

                      How can I fix this? I've tried deleting Anaconda entirely since I don't use it, but the build still seems to run through it. I also tried running the build command inside a virtual environment, but that produced even more import errors.
                      *I left out many of the "skipping" and "warning" lines below, using "..." to save space.
                    (base) keshavshankar@Keshavs-MacBook-Pro IEEE Citation Creator % python setup.py py2app
                    running py2app
                    creating /Users/keshavshankar/Desktop/IEEE Citation Creator/build/bdist.macosx-10.9-x86_64/python3.8-standalone
                    creating /Users/keshavshankar/Desktop/IEEE Citation Creator/build/bdist.macosx-10.9-x86_64/python3.8-standalone/app
                    creating /Users/keshavshankar/Desktop/IEEE Citation Creator/build/bdist.macosx-10.9-x86_64/python3.8-standalone/app/collect
                    creating /Users/keshavshankar/Desktop/IEEE Citation Creator/build/bdist.macosx-10.9-x86_64/python3.8-standalone/app/temp
                    creating build/bdist.macosx-10.9-x86_64/python3.8-standalone/app/lib-dynload
                    creating build/bdist.macosx-10.9-x86_64/python3.8-standalone/app/Frameworks
                    --- Skipping recipe PIL ---
                    *** using recipe: automissing *** {'expected_missing_imports': {'winreg', '_frozen_importlib_external', 'sys.getwindowsversion', '_winapi', 'nt'}}
                    --- Skipping recipe PIL ---
                    --- Skipping recipe autopackages ---
                    CTYPES USERS [SourceModule('ctypes._endian', '/opt/anaconda3/lib/python3.8/ctypes/_endian.py'), Package('test.support', '/opt/anaconda3/lib/python3.8/test/support/__init__.py', ['/opt/anaconda3/lib/python3.8/test/support']), SourceModule('ctypes.wintypes', '/opt/anaconda3/lib/python3.8/ctypes/wintypes.py'), Package('ctypes.macholib', '/opt/anaconda3/lib/python3.8/ctypes/macholib/__init__.py', ['/opt/anaconda3/lib/python3.8/ctypes/macholib']), SourceModule('ctypes._aix', '/opt/anaconda3/lib/python3.8/ctypes/_aix.py'), SourceModule('ctypes.util', '/opt/anaconda3/lib/python3.8/ctypes/util.py'), SourceModule('multiprocessing.sharedctypes', '/opt/anaconda3/lib/python3.8/multiprocessing/sharedctypes.py'), Script('/Users/keshavshankar/Desktop/IEEE Citation Creator/.eggs/py2app-0.27-py3.8.egg/py2app/bootstrap/argv_emulation.py',)]
                    *** using recipe: ctypes *** {'prescripts': ['py2app.bootstrap.ctypes_setup']}
                    --- Skipping recipe PIL ---
                    --- Skipping recipe autopackages ---
                    *** using recipe: detect_dunder_file *** {'packages': {'certifi'}}
                    --- Skipping recipe PIL ---
                    --- Skipping recipe autopackages ---
                    *** using recipe: ftplib *** {}
                    --- Skipping recipe PIL ---
                    --- Skipping recipe autopackages ---
                    --- Skipping recipe gcloud ---
                    --- Skipping recipe lxml ---
                    --- Skipping recipe matplotlib ---
                    *** using recipe: multiprocessing *** {'prescripts': [<_io.StringIO object at 0x7f9221c91ca0>]}
                    --- Skipping recipe PIL ---
                    --- Skipping recipe autopackages ---
                    --- Skipping recipe gcloud ---
                    --- Skipping recipe lxml ---
                    --- Skipping recipe matplotlib ---
                    --- Skipping recipe opencv ---
                    --- Skipping recipe pandas ---
                    --- Skipping recipe platformdirs ---
                    --- Skipping recipe pydantic ---
                    *** using recipe: pydoc *** {}
                    --- Skipping recipe PIL ---
                    --- Skipping recipe autopackages ---
                    --- Skipping recipe gcloud ---
                    --- Skipping recipe lxml ---
                    --- Skipping recipe matplotlib ---
                    --- Skipping recipe opencv ---
                    --- Skipping recipe pandas ---
                    --- Skipping recipe platformdirs ---
                    --- Skipping recipe pydantic ---
                    --- Skipping recipe pyenchant ---
                    --- Skipping recipe pygame ---
                    --- Skipping recipe pylsp ---
                    --- Skipping recipe pyopengl ---
                    --- Skipping recipe pyside ---
                    --- Skipping recipe pyside2 ---
                    --- Skipping recipe pyside6 ---
                    --- Skipping recipe qt5 ---
                    --- Skipping recipe qt6 ---
                    --- Skipping recipe rtree ---
                    *** using recipe: setuptools *** {'expected_missing_imports': {'pkg_resources.extern.packaging', '__main__.__requires__', '__builtin__', 'pkg_resources.extern.pyp', 'pkg_resources.extern.app'}}
                    --- Skipping recipe PIL ---
                    ...
                    --- Skipping recipe shiboken6 ---
                    sip: packages: {'pasta', 'cytoolz-0.11.0-py3.8.egg-info', 'defusedxml', 'cv2', 'PyQt5.QtTest', 'unicodecsv-0.14.1-py3.8.egg-info', '_distutils_hack', 'ipywidgets', 'mypy_extensions-0.4.3-py3.8.egg-info', 'PyQt5', 'qtconsole', 'anaconda_project-0.9.1.dist-info', 'imageio-2.9.0.dist-info', 'conda_env', 'isort-5.8.0.dist-info', 'Markdown-3.3.4.dist-info', 'kiwisolver-1.3.1.dist-info', 'ipython_genutils', 'pytz-2021.1.dist-info', 'Cython', 'opt_einsum-3.3.0.dist-info', 'dateutil', 'bitarray', 'scikit_image-0.18.1-py3.8.egg-info', 'chardet', 'pyls_spyder-0.3.2.dist-info', 'sphinxcontrib_htmlhelp-1.0.3.dist-info', 'google', 'lazy_object_proxy-1.6.0.dist-info', 'PyQt5.QtSql', 'diff_match_patch', 'navigator_updater', 'pandas-1.2.4-py3.8.egg-info', 'astropy-4.2.1.dist-info', 'wrapt', 'nest_asyncio-1.5.1.dist-info', 'html5lib', 'Pillow-8.3.1.dist-info', 'anaconda_client-1.7.2.dist-info', 'xlwings-0.23.0-py3.8.egg-info', 'docutils', 'PyQt5.QtWidgets', 'tlz', 'appdirs-1.4.4-py3.6.egg-info', 'ruamel_yaml_conda-0.15.100.dist-info', 'PyQt5.QtSvg', 'curl', 'alabaster', 'PyQt5.QtMultimediaWidgets', 'bokeh', 'pyasn1_modules-0.2.8.dist-info', 'PyQt5.pyrcc', 'astroid', 'three_merge-0.1.1.dist-info', 'sklearn', 'seaborn', 'mpmath', 'et_xmlfile-1.0.1-py3.7.egg-info', 'cachetools', 'testpath', 'pylint-2.7.4.dist-info', 'MarkupSafe-1.1.1.dist-info', 'requests_oauthlib-1.3.0.dist-info', 'bitarray-1.9.2.dist-info', 'xlwings', 'mkl_fft-1.3.0-py3.8.egg-info', 'numba-0.53.1.dist-info', 'PyQt5.QtWebEngineWidgets', 'patsy-0.5.1-py3.8.egg-info', 'tensorflow-2.5.0.dist-info', 'nbclient', 'astropy', 'spyder-4.2.5.dist-info', 'mpl_toolkits', 'docutils-0.17.dist-info', 'numpydoc', 'absl', 'diff_match_patch-20200713.dist-info', 'pyflakes-2.2.0.dist-info', 'tornado-6.1.dist-info', 'zict-2.0.0-py3.9.egg-info', 'cachetools-4.2.2.dist-info', 'pytest-6.2.3-py3.8.egg-info', 'sortedcollections', 'jupyterlab_widgets-1.0.0.dist-info', 'lxml', 'pathspec-0.7.0.dist-info', 
'setuptools-52.0.0.post20210125-py3.8.egg-info', 'qtpy', 'matplotlib-3.3.4.dist-info', 'bkcharts-0.2-py3.8.egg-info', 'mkl_service-2.3.0-py3.8.egg-info', 'applaunchservices', 'toolz-0.11.1.dist-info', 'PyQt5.QtNfc', 'PyQt5.QtPositioning', 'setuptools', 'pyparsing-2.4.7.dist-info', 'keyring-22.3.0.dist-info', 'PyYAML-5.4.1-py3.8-macosx-10.9-x86_64.egg-info', 'distributed', 'conda_build', 'jinja2', 'webencodings', 'conda_content_trust', 'threadpoolctl-2.1.0.dist-info', 'colorama-0.4.4.dist-info', 'Sphinx-4.0.1.dist-info', 'numpy-1.20.1.dist-info', 'pluggy-0.13.1.dist-info', 'PyQt5.QtQml', 'pyls_black-0.4.6.dist-info', 'statsmodels', 'gevent', 'PyQt5.QtQuickWidgets', 'pathtools-0.1.2.dist-info', 'ipython_genutils-0.2.0.dist-info', 'tables', 'numexpr-2.7.3.dist-info', 'typed_ast-1.4.2.dist-info', 'ipython-7.22.0.dist-info', 'terminado-0.9.4-py3.8.egg-info', 'gast-0.4.0.dist-info', 'psutil-5.8.0.dist-info', 'six-1.15.0.dist-info', 'Flask-1.1.2.dist-info', 'mkl_random', 'glob2', 'certifi', 'olefile', 'scikit_learn-0.24.1.dist-info', 'gast', 'pathlib2', 'keras_nightly-2.5.0.dev2021032900.dist-info', 'lxml-4.6.3.dist-info', 'requests_oauthlib', 'jupyter_client', 'et_xmlfile', 'tensorboard_data_server-0.6.1.dist-info', 'llvmlite-0.36.0-py3.8.egg-info', 'spyder', 'keras', 'erfa', 'snowballstemmer-2.1.0.dist-info', 'scripts', 'PyQt5.QtGui', 'Jinja2-2.11.3.dist-info', 'HeapDict-1.0.1.dist-info', 'atomicwrites', 'flake8-3.9.0.dist-info', 'requests-2.25.1.dist-info', 'jupyter_console-6.4.0.dist-info', 'anaconda_clean-1.0.dist-info', 'dask-2021.4.0.dist-info', 'tensorboard_plugin_wit-1.8.0.dist-info', 'libfuturize', 'urllib3', 'singledispatch', 'xmltodict-0.12.0.dist-info', 'sphinxcontrib', 'PyQt5.QtDesigner', 'conda_repo_cli-1.0.4.dist-info', 'conda_verify-3.4.2.dist-info', 'sphinxcontrib_jsmath-1.0.1.dist-info', 'sniffio-1.2.0.dist-info', 'libarchive', 'soupsieve-2.2.1.dist-info', 'binstar_client', 'Rtree-0.9.7.dist-info', 'glob2-0.7.dist-info', 'clyent-1.2.2-py3.8.egg-info', 
'PyQt5.QtWebEngineCore', 'urllib3-1.26.4.dist-info', 'jupyterlab_pygments', 'py', 'docs', 'grpc', 'partd-1.2.0.dist-info', 'PyQt5.uic', 'yapf-0.31.0.dist-info', 'pyls_spyder', 'wrapt-1.12.1.dist-info', 'greenlet-1.0.0.dist-info', 'argon2_cffi-20.1.0.dist-info', 'astroid-2.5.dist-info', 'pycparser', 'PyQt5.QtSensors', 'conda-4.11.0-py3.8.egg-info', 'boto', 'path', 'cffi', 'imagesize-1.2.0.dist-info', 'sympy-1.8.dist-info', 'itsdangerous', 'msgpack-1.0.2.dist-info', 'jupyter_packaging', 'distributed-2021.4.0.dist-info', 'Keras_Preprocessing-1.1.2.dist-info', 'XlsxWriter-1.3.8.dist-info', 'pyasn1-0.4.8.dist-info', 'async_generator', 'sphinxcontrib_applehelp-1.0.2.dist-info', 'nose', 'mock', 'wcwidth', 'termcolor-1.1.0.dist-info', 'flatbuffers', 'Werkzeug-1.0.1.dist-info', 'rope-0.18.0.dist-info', 'autopep8-1.5.6.dist-info', 'tensorboard', '_pytest', 'ptyprocess', 'markdown', 'jupyterlab_widgets', 'soupsieve', 'idna', 'path-15.1.2.dist-info', 'regex', 'tblib-1.7.0.dist-info', 'cryptography-3.4.7.dist-info', 'yapf', 'jedi-0.17.2.dist-info', 'pkginfo-1.7.0-py3.8.egg-info', 'sqlalchemy', 'tifffile', 'ujson-4.0.2.dist-info', 'prompt_toolkit', 'black-19.10b0.dist-info', 'nose-1.3.7.dist-info', 'numpy', 'parso-0.7.0.dist-info', 'backports.tempfile-1.0.dist-info', 'filelock-3.0.12.dist-info', 'pytesseract', 'pandocfilters-1.4.3.dist-info', 'jupyter_server-1.4.1.dist-info', 'attr', 'anyio', 'argh', 'atomicwrites-1.4.0.dist-info', 'google_pasta-0.2.0.dist-info', 'SQLAlchemy-1.4.7.dist-info', 'typing_extensions-3.7.4.3.dist-info', 'Babel-2.9.0.dist-info', 'PyQt5.QtWebSockets', 'PyQt5.QtSerialPort', 'pathspec', 'requests', 'python_jsonrpc_server-0.4.0.dist-info', 'joblib', 'tqdm-4.59.0.dist-info', 'toml', 'PyQt5.QtPrintSupport', 'pylint', 'markupsafe', 'Cython-0.29.23.dist-info', 'jupyterlab_server', 'watchdog-1.0.2.dist-info', 'mkl_random-1.2.1.dist-info', 'jsonschema', 'more_itertools-8.7.0.dist-info', 'tensorboard-2.5.0.dist-info', 'certifi-2021.10.8-py3.8.egg-info', 
'mock-4.0.3.dist-info', 'more_itertools', 'rsa-4.7.2.dist-info', 'asn1crypto', 'jupyter_client-6.1.12.dist-info', 'importlib_metadata', 'asn1crypto-1.4.0.dist-info', 'pexpect-4.8.0.dist-info', 'packaging-20.9.dist-info', 'python_dateutil-2.8.1.dist-info', 'webencodings-0.5.1-py3.8.egg-info', 'pyOpenSSL-20.0.1.dist-info', 'xlwt', 'spyder_kernels-1.10.2.dist-info', 'itsdangerous-1.1.0.dist-info', 'nbformat', 'jupyter-1.0.0.dist-info', 'oauthlib-3.1.1.dist-info', 'pygments', 'py-1.10.0.dist-info', 'sniffio', 'mistune-0.8.4.dist-info', 'PIL', 'fsspec', 'argh-0.26.2-py3.8.egg-info', 'rtree', 'xlwt-1.3.0-py3.8.egg-info', 'numba', 'notebook', 'unicodecsv', 'bs4', 'qtawesome', 'PyQt5.QtNetwork', 'backports.functools_lru_cache-1.6.4.dist-info', 'scipy-1.6.2.dist-info', 'sympy', 'jdcal-1.4.1.dist-info', 'json5', 'testpath-0.4.4.dist-info', 'brotli', 'anaconda_navigator', 'sklearn-0.0.dist-info', 'PyQt5.QtXmlPatterns', 'pyasn1_modules', 'lazy_object_proxy', 'libarchive_c-2.9.dist-info', 'prometheus_client-0.10.1.dist-info', 'PyQt5.QtXml', 'pyrsistent', 'xlrd', 'iniconfig-1.1.1.dist-info', 'flake8', 'aeosa', 'qtconsole-5.0.3.dist-info', 'networkx', 'PyQt5.QtLocation', 'PyQt5.QtBluetooth', 'pandas', 'python_language_server-0.36.2.dist-info', 'mkl_fft', 'applaunchservices-0.2.1.dist-info', 'conda_package_handling-1.7.3.dist-info', 'attrs-20.3.0.dist-info', 'werkzeug', 'appnope', 'chardet-4.0.0.dist-info', 'QDarkStyle-2.8.1.dist-info', 'statsmodels-0.12.2.dist-info', 'brotlipy-0.7.0-py3.8.egg-info', 'three_merge', 'babel', 'click', 'jupyterlab', 'beautifulsoup4-4.9.3.dist-info', 'pyzmq-20.0.0-py3.8.egg-info', 'protobuf-3.17.3.dist-info', 'nbclassic', 'cryptography', 'pathtools', 'pickleshare-0.7.5.dist-info', 'QtPy-1.9.0.dist-info', 'past', 'openpyxl-3.0.7.dist-info', 'llvmlite', 'conda_token-0.3.0.dist-info', 'cffi-1.14.5.dist-info', 'tensorflow', 'multipledispatch', 'typed_ast', 'h5py', 'zope', 'pytest', 'PyQt5.QtHelp', 'astunparse-1.6.3.dist-info', 'flask', 'bleach', 'partd', 
'importlib_metadata-3.10.0.dist-info', 'simplegeneric-0.8.1-py3.8.egg-info', 'skimage', 'Send2Trash-1.5.0.dist-info', 'matplotlib', 'jupyter_server', 'ply-3.11-py3.8.egg-info', 'nbclient-0.5.3.dist-info', 'PyQt5.Qt', 'widgetsnbextension-3.5.1.dist-info', 'tensorboard_plugin_wit', 'pep8-1.7.1-py3.8.egg-info', 'isort', 'rope', 'clyent', 'h5py-2.10.0.dist-info', 'ipykernel', 'blib2to3', 'pyflakes', 'cv-1.0.0.dist-info', 'prompt_toolkit-3.0.17.dist-info', 'tensorflow_estimator', 'zope.interface-5.3.0.dist-info', 'async_generator-1.10.dist-info', 'ruamel_yaml', 'keyring', 'jupyter_packaging-0.7.12.dist-info', 'IPython', 'idna-2.10.dist-info', 'pydocstyle-6.0.0.dist-info', 'jsonschema-3.2.0.dist-info', 'rsa', 'seaborn-0.11.1.dist-info', 'pycodestyle-2.6.0.dist-info', 'nbclassic-0.2.6.dist-info', 'pluggy', 'PyQt5.QtWebChannel', 'fastcache', 'future', 'xlrd-2.0.1.dist-info', 'pycparser-2.20.dist-info', 'olefile-0.46.dist-info', 'jupyterlab-3.0.14.dist-info', 'prometheus_client', 'mpmath-1.2.1-py3.8.egg-info', 'click-7.1.2.dist-info', 'conda_content_trust-0+unknown.dist-info', 'numpy-1.19.5.dist-info', 'decorator-5.0.6.dist-info', '_yaml', 'joblib-1.0.1.dist-info', 'PyQt5.QtCore', 'send2trash', 'PyQt5.__pycache__', 'navigator_updater-0.2.1-py3.8.egg-info', 'imageio', 'greenlet', 'psutil', 'yaml', 'backports.shutil_get_terminal_size-1.0.0.dist-info', 'absl_py-0.13.0.dist-info', 'cycler-0.10.0-py3.8.egg-info', 'Bottleneck-1.3.2.dist-info', 'regex-2021.4.4.dist-info', 'backcall', 'ipywidgets-7.6.3.dist-info', 'jedi', 'numpydoc-1.1.0.dist-info', 'xlsxwriter', 'yapftests', 'Keras-2.4.3.dist-info', 'pexpect', 'cloudpickle', 'jupyterlab_pygments-0.1.2.dist-info', 'toml-0.10.2.dist-info', 'PyQt5.QtMacExtras', 'networkx-2.5.dist-info', 'msgpack', 'pyasn1', 'pip-21.0.1-py3.8.egg-info', 'tornado', 'conda_build-3.21.4-py3.8.egg-info', 'PyWavelets-1.1.1.dist-info', 'multipledispatch-0.6.0.dist-info', 'jupyter_core', 'repo_cli', 'wurlitzer-2.1.0.dist-info', 'backports', 'opt_einsum', 
'widgetsnbextension', 'sphinxcontrib_websupport-1.2.4.dist-info', 'backcall-0.2.0.dist-info', 'sortedcontainers', 'traitlets', 'libpasteurize', 'appscript-1.1.2.dist-info', 'sphinxcontrib_serializinghtml-1.1.4.dist-info', 'zipp-3.4.1.dist-info', 'tqdm', 'future-0.18.2.dist-info', 'google_auth_oauthlib', 'spyder_kernels', 'jupyter_core-4.7.1.dist-info', 'contextlib2-0.6.0.post1.dist-info', 'packaging', 'Pygments-2.8.1.dist-info', 'pytz', 'pyodbc-4.0.0_unsupported.dist-info', 'parso', 'qdarkstyle', 'QtAwesome-1.0.2.dist-info', 'wcwidth-0.2.5.dist-info', 'nbconvert-6.0.7.dist-info', 'OpenSSL', 'pydocstyle', 'sortedcollections-2.1.0.dist-info', 'conda', 'PyQt5.QtDBus', 'pyls', 'oauthlib', 'iniconfig', 'PySocks-1.7.1.dist-info', 'PyQt5.QtWebEngine', 'jupyterlab_server-2.4.0.dist-info', 'pyrsistent-0.17.3.dist-info', 'appnope-0.1.2.dist-info', 'sphinxcontrib_devhelp-1.0.2.dist-info', 'conda_package_handling', 'gevent-21.1.2.dist-info', 'tables-3.6.1.dist-info', 'nbformat-5.1.3.dist-info', 'colorama', 'conda_verify', 'snowballstemmer', 'terminado', 'dask', 'pywt', 'zmq', 'pyximport', 'textdistance', 'tifffile-2020.10.1-py3.8.egg-info', 'nltk', 'pkginfo', 'PyQt5.pylupdate', 'backports.weakref-1.0.post1-py2.7.egg-info', 'pyls_jsonrpc', 'tensorboard_data_server', 'pkg_resources', 'scipy', 'watchdog', 'PyQt5.QtQuick', 'pytesseract-0.3.8.dist-info', 'wheel', 'google_auth-1.33.1.dist-info', 'ptyprocess-0.7.0.dist-info', 'patsy', 'toolz', 'pathlib2-2.3.5.dist-info', 'PyQt5.QtOpenGL', 'anaconda_navigator-2.0.3-py3.8.egg-info', 'traitlets-5.0.5.dist-info', 'textdistance-4.2.1.dist-info', 'intervaltree', 'grpcio-1.34.1.dist-info', 'wheel-0.36.2-py3.6.egg-info', 'cytoolz', 'json5-0.9.5.dist-info', 'openpyxl', 'mkl', 'google_auth_oauthlib-0.4.4.dist-info', 'bokeh-2.3.2.dist-info', 'intervaltree-3.1.0.dist-info', 'bottleneck', 'jupyter_console', 'xontrib', 'mccabe-0.6.1-py3.8.egg-info', 'cloudpickle-1.6.0.dist-info', 'ply', 'tensorflow_estimator-2.5.0.dist-info', 
'ipykernel-5.3.4.dist-info', 'keras_preprocessing', 'bkcharts', 'astunparse', 'alabaster-0.7.12.dist-info', 'argon2', 'sortedcontainers-2.3.0.dist-info', 'nbconvert', 'opencv_python-4.5.3.56.dist-info', 'fastcache-1.1.0.dist-info', 'tblib', 'pyls_black', 'anyio-2.2.0.dist-info', 'pip', 'sphinxcontrib_qthelp-1.0.3.dist-info', '__pycache__', 'zict', 'html5lib-1.1.dist-info', 'numexpr', 'notebook-6.3.0.dist-info', 'bleach-3.3.0.dist-info', 'defusedxml-0.7.1.dist-info', 'h5py-3.1.0.dist-info', 'flatbuffers-1.12.dist-info', 'zope.event-4.5.0-py3.8.egg-info', 'conda_token', 'nltk-3.6.1.dist-info', 'fsspec-0.9.0.dist-info', 'anaconda_project', 'pyerfa-1.7.3.dist-info', 'sphinx', 'singledispatch-0.0.0.dist-info'}
                    WARNING: ImportError in sip recipe ignored: No module named cytoolz-0
                    WARNING: ImportError in sip recipe ignored: No module named unicodecsv-0
                    WARNING: ImportError in sip recipe ignored: No module named mypy_extensions-0
                    WARNING: ImportError in sip recipe ignored: No module named anaconda_project-0
                    WARNING: ImportError in sip recipe ignored: No module named imageio-2
                    -:1114: SyntaxWarning: "is" with a literal. Did you mean "=="?
                    /opt/anaconda3/lib/python3.8/site-packages/boto/iam/connection.py:1114: SyntaxWarning: "is" with a literal. Did you mean "=="?
                      if tld is 'default':
                    WARNING: ImportError in sip recipe ignored: No module named isort-5
                    WARNING: ImportError in sip recipe ignored: No module named Markdown-3
                    ...
                    WARNING: ImportError in sip recipe ignored: No module named sniffio-1
                    WARNING: ImportError in sip recipe ignored: No module named soupsieve-2
                    -:124: SyntaxWarning: "is" with a literal. Did you mean "=="?
                    -:130: SyntaxWarning: "is" with a literal. Did you mean "=="?
                    /opt/anaconda3/lib/python3.8/site-packages/binstar_client/requests_ext.py:124: SyntaxWarning: "is" with a literal. Did you mean "=="?
                      if mode is 0:
                    /opt/anaconda3/lib/python3.8/site-packages/binstar_client/requests_ext.py:130: SyntaxWarning: "is" with a literal. Did you mean "=="?
                      elif mode is 2:
                    WARNING: ImportError in sip recipe ignored: No module named Rtree-0
                    WARNING: ImportError in sip recipe ignored: No module named glob2-0
                    ...
                    WARNING: ImportError in sip recipe ignored: No module named backcall-0
                    WARNING: ImportError in sip recipe ignored: No module named appscript-1
                    WARNING: ImportError in sip recipe ignored: No module named sphinxcontrib_serializinghtml-1
                    WARNING: ImportError in sip recipe ignored: No module named zipp-3
                    WARNING: ImportError in sip recipe ignored: No module named future-0
                    ...
                    WARNING: ImportError in sip recipe ignored: No module named cloudpickle-1
                    WARNING: ImportError in sip recipe ignored: No module named tensorflow_estimator-2
                    WARNING: ImportError in sip recipe ignored: No module named ipykernel-5
                    -:478: SyntaxWarning: "is not" with a literal. Did you mean "!="?
                    /opt/anaconda3/lib/python3.8/site-packages/bkcharts/utils.py:478: SyntaxWarning: "is not" with a literal. Did you mean "!="?
                      if lev is not '' and row_text == '':
                    -:615: SyntaxWarning: "is not" with a literal. Did you mean "!="?
                    -:615: SyntaxWarning: "is not" with a literal. Did you mean "!="?
                    /opt/anaconda3/lib/python3.8/site-packages/bkcharts/data_source.py:615: SyntaxWarning: "is not" with a literal. Did you mean "!="?
                      k is not 'dims' and k is not 'required_dims']
                    /opt/anaconda3/lib/python3.8/site-packages/bkcharts/data_source.py:615: SyntaxWarning: "is not" with a literal. Did you mean "!="?
                      k is not 'dims' and k is not 'required_dims']
                    -:452: SyntaxWarning: "is" with a literal. Did you mean "=="?
                    /opt/anaconda3/lib/python3.8/site-packages/bkcharts/builder.py:452: SyntaxWarning: "is" with a literal. Did you mean "=="?
                      if attr is 'label':
                    WARNING: ImportError in sip recipe ignored: No module named alabaster-0
                    WARNING: ImportError in sip recipe ignored: No module named sortedcontainers-2
                    WARNING: ImportError in sip recipe ignored: No module named opencv_python-4
                    WARNING: ImportError in sip recipe ignored: No module named fastcache-1
                    WARNING: ImportError in sip recipe ignored: No module named anyio-2
                    WARNING: ImportError in sip recipe ignored: No module named sphinxcontrib_qthelp-1
                    WARNING: ImportError in sip recipe ignored: No module named html5lib-1
                    WARNING: ImportError in sip recipe ignored: No module named notebook-6
                    WARNING: ImportError in sip recipe ignored: No module named bleach-3
                    WARNING: ImportError in sip recipe ignored: No module named defusedxml-0
                    WARNING: ImportError in sip recipe ignored: No module named h5py-3
                    WARNING: ImportError in sip recipe ignored: No module named flatbuffers-1
                    WARNING: ImportError in sip recipe ignored: No module named nltk-3
                    WARNING: ImportError in sip recipe ignored: No module named fsspec-0
                    WARNING: ImportError in sip recipe ignored: No module named pyerfa-1
                    WARNING: ImportError in sip recipe ignored: No module named singledispatch-0
                    *** using recipe: sip *** {'resources': ['/Users/keshavshankar/Desktop/IEEE Citation Creator/.eggs/py2app-0.27-py3.8.egg/py2app/recipes/qt.conf']}
                    --- Skipping recipe PIL ---
                    *** using recipe: autopackages *** {'packages': ['docutils', 'pylint', 'h5py', 'numpy', 'scipy', 'tensorflow']}
                    -:142: SyntaxWarning: "is" with a literal. Did you mean "=="?
                    -:144: SyntaxWarning: "is" with a literal. Did you mean "=="?
                    -:146: SyntaxWarning: "is" with a literal. Did you mean "=="?
                    /opt/anaconda3/lib/python3.8/site-packages/tensorflow/python/keras/benchmarks/benchmark_util.py:142: SyntaxWarning: "is" with a literal. Did you mean "=="?
                      if 'x' is None:
                    /opt/anaconda3/lib/python3.8/site-packages/tensorflow/python/keras/benchmarks/benchmark_util.py:144: SyntaxWarning: "is" with a literal. Did you mean "=="?
                      if 'optimizer' is None:
                    /opt/anaconda3/lib/python3.8/site-packages/tensorflow/python/keras/benchmarks/benchmark_util.py:146: SyntaxWarning: "is" with a literal. Did you mean "=="?
                      if 'loss' is None:
                    --- Skipping recipe PIL ---
                    --- Skipping recipe gcloud ---
                    *** using recipe: lxml *** {}
                    --- Skipping recipe PIL ---
                    --- Skipping recipe gcloud ---
                    *** using recipe: matplotlib *** {'resources': ['/opt/anaconda3/lib/python3.8/site-packages/matplotlib/mpl-data'], 'packages': ['matplotlib']}
                    --- Skipping recipe PIL ---
                    --- Skipping recipe gcloud ---
                    *** using recipe: opencv *** {'includes': ['numpy']}
                    --- Skipping recipe PIL ---
                    --- Skipping recipe gcloud ---
                    *** using recipe: pandas *** {'includes': ['pandas._libs.tslibs.base']}
                    --- Skipping recipe PIL ---
                    --- Skipping recipe gcloud ---
                    --- Skipping recipe platformdirs ---
                    --- Skipping recipe pydantic ---
                    --- Skipping recipe pyenchant ---
                    --- Skipping recipe pygame ---
                    --- Skipping recipe pylsp ---
                    --- Skipping recipe pyopengl ---
                    --- Skipping recipe pyside ---
                    --- Skipping recipe pyside2 ---
                    --- Skipping recipe pyside6 ---
                    --- Skipping recipe qt5 ---
                    --- Skipping recipe qt6 ---
                    error: [Errno 2] No such file or directory: '/opt/anaconda3/lib/python3.8/site-packages/rtree/lib'
                    

                    ANSWER

                    Answered 2022-Mar-13 at 16:13

                    The error error: [Errno 2] No such file or directory: '/opt/anaconda3/lib/python3.8/site-packages/rtree/lib' was caused by py2app trying to build the application bundle against a Python installation that no longer exists. Even after you uninstall a distribution manager like Anaconda, leftover files and configuration can remain somewhere on your Mac.

                    The fix:

                    1. Open the terminal and run the command type -a python.
                    • You will see lines similar to:
                    python is /opt/anaconda3/bin/python
                    python is /usr/local/bin/python
                    python is /usr/bin/python
                    
                    • In my case, Anaconda had been deleted, but not fully, so building the app through it corrupted the process. We want to delete Anaconda completely.
                    2. To delete it, open the Anaconda location by typing open /opt/ in the terminal.
                    • This will display the different environments on your computer. Delete the entire folder of the environment you don't want.
                    3. Now rerun type -a python in the terminal and the Anaconda path should be gone.
                    • After this, simply rebuild your app in alias mode to make sure it works, then proceed with the full build.
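                    The diagnostic steps above can be sketched as a short shell session. This is a sketch under the answer's assumptions: /opt/anaconda3 is the default Anaconda location and may differ on your machine, and the destructive or project-specific commands are left commented out.

```shell
# Sketch of the cleanup steps above; /opt/anaconda3 is an assumed location.

# 1. List every interpreter named "python" still on the PATH.
type -a python 2>/dev/null || echo "no 'python' on PATH"

# 2. Look for leftover Anaconda directories under /opt.
ls /opt 2>/dev/null || true

# 3. Remove the stale environment (destructive -- verify the path first):
#    rm -rf /opt/anaconda3

# 4. Re-check the PATH, then rebuild in alias mode before the full build:
#    type -a python
#    python setup.py py2app -A
```

                    Rebuilding in alias mode (py2app's -A flag) links the bundle to your source instead of copying everything, so it is a quick way to confirm the build now resolves the correct interpreter before committing to a full build.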

                    Source https://stackoverflow.com/questions/71414043

                    Community Discussions and Code Snippets contain sources that include the Stack Exchange Network.


                    Install intervaltree

                    You can download it from GitHub.
                    You can use intervaltree like any standard Java library. Include the jar files in your classpath. You can also use any IDE to run and debug the intervaltree component as you would any other Java program. Best practice is to use a build tool that supports dependency management, such as Maven or Gradle. For Maven installation, refer to maven.apache.org; for Gradle installation, refer to gradle.org.
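                    Since the project does not appear to publish artifacts to a central repository (it has no releases and no declared license), one option is to build the jar from the GitHub sources yourself and reference it as a local file dependency. A minimal Gradle sketch, assuming you have placed the built jar at libs/intervaltree.jar (the path and file name are assumptions, not part of the project):

```
// Hypothetical local-file dependency; adjust the path to wherever you
// placed the jar you built from the GitHub sources.
dependencies {
    implementation files('libs/intervaltree.jar')
}
```

                    A Maven equivalent would be installing the jar into your local repository with mvn install:install-file and then depending on whatever coordinates you chose during installation.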

                    Support

                    For new features, suggestions, and bugs, create an issue on GitHub. If you have any questions, ask them on Stack Overflow.

                    • © 2022 Open Weaver Inc.