scikit-garden | A garden for scikit-learn compatible trees | Machine Learning library
kandi X-RAY | scikit-garden Summary
A garden for scikit-learn compatible trees
Top functions reviewed by kandi - BETA
- Generate API documentation
- Return the docstring of an object
- Generate documentation for a function
- Replace docstrings in a paragraph
- Compute the OOB score for each estimator
- Generate the unsampled (out-of-bag) indices
- Generate sample indices
- Fit the model
- Fit the classifier
- Compute the prediction for each node
- Perform validation on X
- Predict the class label for each sample
- Predict probabilities for each estimator
- Predict and return the mean and standard deviation
- Compute the standard deviation
- Fit the ensemble
- Return the number of samples to bootstrap
- Compute the OOB score
- Helper function for parallel build trees
- Compute the decision path
- Compute the log probability for each input X
- Evaluate the model
scikit-garden Key Features
scikit-garden Examples and Code Snippets
Community Discussions
Trending Discussions on scikit-garden
QUESTION
I am using Jupyter on GCP (set up the easy way via the AI Platform) to train a MondrianForestRegressor from scikit-garden. My dataset is about 450000 x 300, and training on the machine as-is (32 CPUs, 208 GB RAM), even utilising parallelism with n_jobs=-1, is far slower than I would like.
I attached a GPU (2x NVIDIA Tesla T4), restarted the instance and tried again. Training speed seems unaffected by this change.
- Is there something I need to do when training the model in Jupyter to make sure that the GPUs are actually being used?
- Are GPUs even useful for tree-based methods? There is literature which would suggest that they are (https://link.springer.com/chapter/10.1007/978-3-540-88693-8_44), but I don't fully understand the intricacies of what makes a GPU more suitable for different types of algorithms beyond the fact that they deal well with giant matrix calculations e.g. for deep learning.
ANSWER
Answered 2019-Dec-03 at 01:21

When you create a Notebook, it allocates a GCE VM instance and a GPU. To monitor the GPU, you should install the GPU metrics reporting agent on each VM instance that has a GPU attached; it collects GPU data and sends it to Stackdriver Monitoring.
Additionally, there are two ways to make use of the GPUs:
High-level Estimator API: No code changes are necessary as long as your ClusterSpec is configured properly. If a cluster is a mixture of CPUs and GPUs, map the ps job name to the CPUs and the worker job name to the GPUs.
Core TensorFlow API: You must assign ops to run on GPU-enabled machines. This process is the same as using GPUs with TensorFlow locally. You can use tf.train.replica_device_setter to assign ops to devices.
Also, see the linked lectures on when to use a GPU instead of a CPU, and on GPU performance for tree training.
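Note that scikit-garden estimators follow the scikit-learn API and train on CPU only, which is consistent with the unchanged training speed after attaching GPUs; the only built-in lever is process-level parallelism via n_jobs. A minimal sketch of that mechanism, using scikit-learn's RandomForestRegressor as a stand-in (the timing numbers will vary by machine):

```python
import time

import numpy as np
from sklearn.ensemble import RandomForestRegressor

X = np.random.rand(5000, 20)
y = np.random.rand(5000)

for n_jobs in (1, -1):
    start = time.time()
    model = RandomForestRegressor(n_estimators=50, n_jobs=n_jobs, random_state=0)
    model.fit(X, y)  # joblib spreads tree construction across processes
    print(f"n_jobs={n_jobs}: fit in {time.time() - start:.2f}s")
```

With n_jobs=-1, all available cores are used, but the work is still entirely on the CPU; no op is dispatched to a GPU.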
QUESTION
I've started working with quantile random forests (QRFs) from the scikit-garden package. Previously I was creating regular random forests using RandomForestRegressor from sklearn.ensemble.
It appears that the speed of the QRF is comparable to the regular RF with small dataset sizes, but that as the size of the data increases, the QRF becomes MUCH slower at making predictions than the RF.
Is this expected? If so, could someone please explain why it takes such a long time to make these predictions and/or give any suggestions as to how I could get quantile predictions in a more timely manner.
See below for a toy example, where I test the training and predictive times for a variety of dataset sizes.
...ANSWER
Answered 2019-May-28 at 08:18

I am not a developer on this or any quantile regression package, but I've looked at the source code for both scikit-garden and quantRegForest/ranger, and I have some idea of why the R versions are so much faster:
EDIT: On a related github issue, lmssdd mentions how this method performs significantly worse than the 'standard procedure' from the paper. I haven't read the paper in detail, so take this answer with a grain of skepticism.
Explanation of difference in skgarden/quantregforest methods

The basic idea of the skgarden predict function is to save all the y_train values corresponding to all of the leaves. Then, when predicting a new sample, you gather the relevant leaves and corresponding y_train values, and compute the (weighted) quantile of that array. The R versions take a shortcut: they only save a single, randomly chosen y_train value per leaf node. This has two advantages: it makes gathering the relevant y_train values a lot simpler, since there is always exactly one value in every leaf node; and it makes the quantile calculation a lot simpler, since every leaf has the exact same weight.
Since you only use a single (random) value per leaf instead of all of them, this is an approximation method. In my experience, if you have enough trees, (at least 50-100 or so), this has very little effect on the result. However, I don't know enough about the math to say how good the approximation is exactly.
TL;DR: how to make skgarden predict faster

Below is an implementation of the simpler R method of quantile prediction, for a RandomForestQuantileRegressor model. Note that the first half of the function is the (one-time) process of selecting a random y_train value per leaf. If the author were to implement this method in skgarden, they would logically move this part to the fit method, leaving only the last 6 or so lines, which makes for a much faster predict method. Also, in my example I am using quantiles from 0 to 1, instead of from 0 to 100.
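The answer's original code block is not reproduced on this page. The following is a minimal sketch of the same idea, using scikit-learn's RandomForestRegressor as a stand-in for the skgarden estimator (both expose the same per-tree apply interface); the helper names fit_leaf_samples and predict_quantile are hypothetical:

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor

def fit_leaf_samples(forest, X_train, y_train, seed=0):
    """One-time step: store one randomly chosen y_train value per leaf of each tree."""
    rng = np.random.default_rng(seed)
    leaf_values = []
    for tree in forest.estimators_:
        leaves = tree.apply(X_train)  # leaf id reached by each training sample
        values = {}
        for leaf in np.unique(leaves):
            idx = np.flatnonzero(leaves == leaf)
            values[leaf] = y_train[rng.choice(idx)]
        leaf_values.append(values)
    return leaf_values

def predict_quantile(forest, leaf_values, X, q=0.5):
    """Gather one stored value per tree per sample, then take the quantile across trees."""
    per_tree = np.empty((len(forest.estimators_), X.shape[0]))
    for i, tree in enumerate(forest.estimators_):
        leaves = tree.apply(X)
        per_tree[i] = [leaf_values[i][leaf] for leaf in leaves]
    return np.quantile(per_tree, q, axis=0)

X, y = make_regression(n_samples=500, n_features=5, random_state=0)
rf = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, y)
stored = fit_leaf_samples(rf, X, y)  # would live in fit() in a real implementation
median = predict_quantile(rf, stored, X[:10], q=0.5)
```

Because each tree contributes exactly one value per sample, the quantile step is an unweighted np.quantile over n_estimators values, which is the source of the speedup over the exact skgarden method.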
QUESTION
I'm running joblib in a Flask application living inside a Docker container together with uWSGI (started with threads enabled) which is started by supervisord.
The startup of the webserver shows the following error:
...ANSWER
Answered 2019-Feb-20 at 19:51

It seems that semaphoring is not enabled on your image: joblib checks for multiprocessing.Semaphore(), and only root has read/write permission on shared memory in /dev/shm.
Have a look at this question and this answer.
This is run in one of my containers.
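The check joblib performs can be reproduced directly inside the container; a minimal sketch:

```python
# Reproduce joblib's startup check: try to create a multiprocessing semaphore.
# If shared memory in /dev/shm is not writable by the current user, this
# raises OSError, and joblib falls back to serial execution with a warning.
import multiprocessing

try:
    multiprocessing.Semaphore()
    print("semaphores available")
except OSError as exc:
    print(f"semaphores unavailable: {exc}")
```

If this prints the OSError branch inside the container but not on the host, the image's /dev/shm permissions (or the user the uWSGI workers run as) are the problem.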
Community Discussions, Code Snippets contain sources that include Stack Exchange Network
Vulnerabilities
No vulnerabilities reported
Install scikit-garden
Support