dtaidistance | Time series distances : Dynamic Time Warping | Time Series Database library
kandi X-RAY | dtaidistance Summary
Time series distances: Dynamic Time Warping (fast DTW implementation in C)
Top functions reviewed by kandi - BETA
- Plot the structure
- Get the min and max values of the series
- Return the linkage of a node
- Print the build extensions
- Check for OpenMP availability
- Check if the compiler is clang
- Plot two paths
- Find the best path in a list
- Estimate the tree
- Computes the entropy of the targets
- Performs clustering
- Plot the series
- Evaluate Needleman-Wunsch
- Make a substitution function
- Calculate the distance between two paths
- Return a dict of all the available targets
- Compute the linkage tree
- Fast distance matrix
- Warp between two vectors
- Compute weights for the given series
- Calculate the distance from a series t
- Generate a template
- Return a dictionary of the available targets
- Calculate the path probability based on warping_path
- Test if the test fails
- Fast path to warping path
- Return the start and end segment of the alignment
dtaidistance Key Features
dtaidistance Examples and Code Snippets
Community Discussions
Trending Discussions on dtaidistance
QUESTION
I have a dataset of shape (700000, 20) and I want to apply KNN to it.
However, prediction takes a really long time. Can someone please help me understand how I can reduce the KNN prediction time?
Is there something like GPU-KNN?
Below is the code I am using.
...ANSWER
Answered 2022-Jan-01 at 22:19 I suggest reducing the number of features, which appears to be 20 from your dataset shape; that means you have 20 dimensions.
You can reduce the number of features by using PCA (Principal Component Analysis) like the following:
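A minimal sketch of that suggestion (assuming scikit-learn, with synthetic data standing in for the asker's (700000, 20) set): reduce the 20 features with PCA, then fit the KNN classifier on the reduced data so each distance computation touches fewer dimensions.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier

# Synthetic stand-in for the asker's (700000, 20) dataset (smaller here for speed)
rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 20))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

# Project the 20 features down to 5 principal components
pca = PCA(n_components=5)
X_reduced = pca.fit_transform(X)

# KNN prediction now works on 5-dimensional points instead of 20-dimensional ones
knn = KNeighborsClassifier(n_neighbors=5)
knn.fit(X_reduced, y)
pred = knn.predict(X_reduced[:10])
print(X_reduced.shape, pred.shape)
```

Passing n_jobs=-1 to KNeighborsClassifier is another lever: it parallelizes the neighbor search across CPU cores.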
QUESTION
The DTAIDistance package can be used to find the k best matches of an input query, but it cannot be used with a multi-dimensional input query. Moreover, I want to find the k best matches of many input queries in one run.
I modified the DTAIDistance function so that it can search subsequences across multiple dimensions and multiple queries. I use njit with parallel=True to speed up the process, i.e. the p_calc function, which applies numba-parallel execution to each input query. However, I find that the parallel calculation does not seem to speed up the computation compared to simply looping over the input queries one by one, i.e. the calc function.
ANSWER
Answered 2021-Aug-16 at 21:00 I assume the code of both implementations is correct and has been carefully checked (otherwise the benchmark would be pointless).
The issue likely comes from the compilation time of the function. Indeed, the first call is significantly slower than subsequent calls, even with cache=True.
This is especially important for the parallel implementation, as compiling parallel Numba code is often slower (since it is more complex). The best way to avoid this is to compile Numba functions ahead of time by providing explicit type signatures to Numba.
Besides this, benchmarking a computation only once is generally considered bad practice. Good benchmarks perform multiple iterations and discard the first ones (or consider them separately). Several other problems can appear when code is executed for the first time: CPU caches (and the TLB) are cold, the CPU frequency can change during execution and is likely lower when the program has just started, page faults may occur, etc.
In practice, I cannot reproduce the issue. Actually, p_calc is 3.3 times faster on my 6-core machine. When the benchmark is run in a loop of 5 iterations, the measured time of the parallel implementation is much smaller: about 13 times faster (which is actually suspicious for a parallel implementation using 6 threads on a 6-core machine).
QUESTION
I am trying to use the DTW algorithm from the Similarity Measures library. However, I get hit with an error that states a 2-Dimensional Array is required. I am not sure I understand how to properly format the data, and the documentation is leaving me scratching my head.
https://github.com/cjekel/similarity_measures/blob/master/docs/similaritymeasures.html
According to the documentation, the function takes two arguments (exp_data and num_data) for the data sets, which makes sense. What doesn't make sense to me is:
exp_data : array_like
Curve from your experimental data. exp_data is of (M, N) shape, where M is the number of data points, and N is the number of dimensions
This is the same for both the exp_data and num_data arguments.
So, for further clarification, let's say I am implementing the fastdtw library. It looks like this:
...ANSWER
Answered 2021-Jun-01 at 17:44 It appears the solution in my case was to include the index in the array. For example, if your data looks like this:
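In other words, each curve must be a 2-D (M, N) array: M rows of points, N coordinates per point. For a 1-D series, stacking the index alongside the values produces the expected shape. A sketch with numpy (hypothetical data, not the asker's):

```python
import numpy as np

# A 1-D series of M values
values = np.array([3.0, 5.0, 4.0, 6.0, 7.0])

# Stack the index as the first column -> shape (M, 2): one (x, y) pair per row
exp_data = np.column_stack((np.arange(len(values)), values))

print(exp_data.shape)  # (5, 2)
# exp_data now has the (M, N) shape that similaritymeasures.dtw expects
```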
QUESTION
Is there a way to use MinGW as a substitute for MS Visual C++? A lot of Python packages need VS C++ to be installed: 4.5 GB of disk space! MinGW takes only 450 MB and achieves the same aim of compiling C/C++.
I am using Visual Studio Code, and I try to avoid the Microsoft Visual C++ installation that is proposed under point 3 ("You can also install just the C++ Build Tools") here: https://code.visualstudio.com/docs/cpp/config-msvc/#_prerequisites
Perhaps there is just a trick needed to imitate MS Visual C++ with MinGW, so that the Python packages directly find the MinGW compiler as if it were MS Visual C++? Perhaps adding symlinks to the lib directory and setting some system path variable?
My question is strongly linked to "Microsoft Visual C++ 14.0 is required (Unable to find vcvarsall.bat)".
The error that I get when I install a package that needs MS Visual C++, for example when running pip install dtaidistance:
ANSWER
Answered 2020-Aug-11 at 11:19 There is no answer.
- MSVC
I have sent feedback to them, yet I did not get any reply. A Python developer assured me that they know about this size issue and do not like it either. The only chance is a change by the MSVC developers themselves. It is unlikely, but not impossible, that the size will be reduced in future releases by the MSVC team.
- Python distutils workaround
The Python community will not provide a distutils workaround, see https://discuss.python.org/t/imitate-visual-c-with-mingw-or-other-c-compilers-for-python-packages-based-on-visual-c/4609/11.
Quote from the Python forum:
There was a workaround until Python 3.4 which might also be an approach now: use the MinGW compiler by adding a "distutils.cfg" to the folder "\Lib\distutils" in the Python install directory. It would be nice to have that MinGW "distutils.cfg" workaround for recent Python versions as well.
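For reference, the historical workaround the quote describes was a small config file (distutils.cfg, placed in Lib\distutils of the Python install directory); as the thread notes, it no longer works on current Python versions:

```ini
[build]
compiler = mingw32
```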
Now it turns out that distutils will not be a realistic workaround.
- There is no one who will work on it. A Python developer who was involved in the project before: Maybe there is ... someone else who might offer to help. But I wouldn't be too optimistic.
- And a deprecation issue:
As an aside, now that setuptools has fully taken on distutils, we’ll be deprecating it in the standard library (soon). So this request in future would have to be made to each project implementing a build tool.
QUESTION
I have 6 time series values as follows.
...ANSWER
Answered 2020-Jun-05 at 09:14 Everything is correct. As per the docs:
The result is stored in a matrix representation. Since only the upper triangular matrix is required, this representation uses more memory than necessary.
All diagonal elements are 0, and the lower triangular matrix is the same as the upper triangular matrix mirrored at the diagonal. As all these values can be deduced from the upper triangular matrix, they aren't shown in the output.
You can even use the compact=True argument to get only the values from the upper triangular matrix concatenated into a 1D array.
You can convert the result to a full matrix like this:
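A sketch of that conversion (assuming the upper-triangular output described above, with 0 on the diagonal and inf filling the unused lower triangle): keep the strict upper triangle and mirror it across the diagonal.

```python
import numpy as np

# Example of a dtaidistance-style distance matrix: distances in the upper
# triangle, 0 on the diagonal, inf in the redundant lower triangle
m = np.array([
    [0.0,    1.0,    2.0],
    [np.inf, 0.0,    3.0],
    [np.inf, np.inf, 0.0],
])

# Keep the strict upper triangle (zeroes out the inf entries), then mirror it
upper = np.triu(m, 1)
full = upper + upper.T  # the diagonal stays 0

print(full)
```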
Community Discussions, Code Snippets contain sources that include Stack Exchange Network
Vulnerabilities
No vulnerabilities reported
Install dtaidistance