dmlc-core | common bricks library for building
kandi X-RAY | dmlc-core Summary
DMLC-Core is the backbone library supporting all DMLC projects; it offers the common bricks needed to build efficient and scalable distributed machine learning libraries.
Community Discussions
Trending Discussions on dmlc-core
QUESTION
I'm trying to use XGBoost to predict a single-target (one-attribute) dataframe. My code is below; I run it on Colab.
...ANSWER
Answered 2021-Jun-05 at 03:28

XGBoost cannot handle categorical variables, so they need to be encoded before being passed to the XGBoost model. There are many ways to encode them, depending on the nature of the categorical variable. Since I believe your strings have some order, label encoding is suited to your categorical variables:
Full code:
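The asker's full code is elided above. As an illustration only, here is what label encoding does, shown with a plain dictionary rather than sklearn's LabelEncoder (the column values are hypothetical; note that LabelEncoder assigns codes in lexicographic order, so for truly ordinal data you would supply the order explicitly):

```python
# Hypothetical ordered categorical column to encode before XGBoost.
sizes = ["small", "medium", "large", "medium", "small"]

# Build a code for each category in sorted order (LabelEncoder's behaviour).
mapping = {cat: code for code, cat in enumerate(sorted(set(sizes)))}
encoded = [mapping[s] for s in sizes]

print(mapping)  # {'large': 0, 'medium': 1, 'small': 2}
print(encoded)  # [2, 1, 0, 1, 2]
```

With the categorical columns replaced by their integer codes, the dataframe can be passed to the XGBoost model as usual.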
QUESTION
I want to install XGBoost for use from C++ on Windows 10.
I used the following command (in Git Bash):
...ANSWER
Answered 2019-Nov-19 at 14:09

It's all OK; upgrading MinGW-W64 to the latest version (8.1.0 for me) solves the problems easily.
You can download MinGW-W64 here: MinGW-W64 download link
And you can use release_0.90 of xgboost: xgboost link
Use the following commands:
git clone --recursive https://github.com/dmlc/xgboost
cd xgboost
git checkout release_0.90
alias make='mingw32-make'
cd dmlc-core
make
cd ../rabit
make lib/librabit_empty.a
cd ..
cp make/mingw64.mk config.mk
make
After that, you can find xgboost.dll in "Your path\xgboost\lib\". Try linking against it! You also need to include "xgboost\include\c_api.h".
For example:
g++ 1.cpp -o b -std=c++11 -IE:\xgboost\include -LE:\xgboost\lib -lxgboost
QUESTION
I'm trying to set up a custom version of XGBoost from https://github.com/robjhyndman/M4metalearning in R.
When I run devtools::install_github("pmontman/customxgboost")
I get this error:
ANSWER
Answered 2019-Apr-25 at 06:13

I finally fixed the problem; I'm going to describe the entire process.
In essence you have to follow these steps:
https://xgboost.readthedocs.io/en/latest/build.html
Specifically, this is important: brew install gcc@8
This command downloads a version of gcc that supports OpenMP, an important library for XGBoost given its parallel features.
The shared library xgboost.so may fail to load with a "symbol not found" error. This occurs when you try to link objects compiled with different gcc versions.
More info here: What does "Symbol not found / Expected in: flat namespace" actually mean?
To fix this I changed the contents of the ~/.R/Makevars file to:
CC=/usr/local/bin/gcc-8
CXX=/usr/local/bin/g++-8
CXX11=/usr/local/bin/g++-8
Note that CXX11 differs from the original question's version.
This solved my problem.
More info about the Makevars file here: Understanding the contents of the Makevars file in R (macros, variables, ~/.R/Makevars and pkg/src/Makevars)
If the above fails, it is recommended to delete the R libraries and reinstall them; there may be a problem with a symbolic link or similar.
R libraries (macOS High Sierra) are stored here:
/Library/Frameworks/R.framework/Versions/3.5/Resources/library/
QUESTION
I'm trying to add external project as a library to my project using ExternalProject_Add:
...ANSWER
Answered 2019-Apr-15 at 16:24

I'll post the final CMakeLists.txt for including xgboost in your project; it might be useful for someone. The solution to the problem above is to create the directories during the CMake configure phase. (NOTE: I'm building on OSX, so for GNU/Linux you would need liblibxgboost.so instead of liblibxgboost.dylib):
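The answer's final CMakeLists.txt is elided; below is a hedged reconstruction of the pattern it describes (the git tag, target names, and directory layout are assumptions for illustration, not the answer's actual file):

```cmake
# Sketch: pull xgboost in via ExternalProject_Add and pre-create the
# directories an IMPORTED target references, at configure time, so CMake
# does not reject them before the external project has been built.
include(ExternalProject)

set(XGBOOST_PREFIX ${CMAKE_BINARY_DIR}/xgboost)
ExternalProject_Add(xgboost_external
  GIT_REPOSITORY https://github.com/dmlc/xgboost
  GIT_TAG        v0.90
  PREFIX         ${XGBOOST_PREFIX}
  INSTALL_COMMAND ""
)

# Created during the *configure* phase -- this is the key workaround.
file(MAKE_DIRECTORY ${XGBOOST_PREFIX}/src/xgboost_external/include)

add_library(xgboost SHARED IMPORTED)
set_target_properties(xgboost PROPERTIES
  # liblibxgboost.dylib on macOS; liblibxgboost.so on GNU/Linux
  IMPORTED_LOCATION ${XGBOOST_PREFIX}/src/xgboost_external/lib/liblibxgboost.dylib
  INTERFACE_INCLUDE_DIRECTORIES ${XGBOOST_PREFIX}/src/xgboost_external/include)
add_dependencies(xgboost xgboost_external)
```

Your own targets can then `target_link_libraries(myapp PRIVATE xgboost)` as with any imported library.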
QUESTION
I have a block of code that is supposed to build an RNN model with 5 lag variables for a time series observation. Here is the code:
...ANSWER
Answered 2018-Mar-09 at 19:28

Most probably the issue is with the data you receive from Quandl or how you are processing it.
NAs stay in the array after na.trim() if the NA is in the middle. This may cause a shape mismatch in some situations. I would recommend looking at the state of the input the next time you see the failure.
Otherwise, after adding a few extra required callbacks, your code is valid. Here it is with the parameters added inline and using synthetic data:
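The answer's R code itself is elided, but the na.trim() caveat is easy to illustrate in isolation: trimming removes only leading and trailing missing values, so an interior NA survives and can later break shape matching. A minimal Python sketch (trim_na is a hypothetical stand-in for zoo::na.trim, with None playing the role of NA):

```python
def trim_na(values):
    """Drop leading and trailing missing values (None), like zoo::na.trim."""
    start = 0
    while start < len(values) and values[start] is None:
        start += 1
    end = len(values)
    while end > start and values[end - 1] is None:
        end -= 1
    return values[start:end]

series = [None, None, 1.0, 2.0, None, 4.0, None]
print(trim_na(series))  # [1.0, 2.0, None, 4.0] -- the interior None remains
```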
QUESTION
While trying to train a LeNet model for multiclass classification with H2O Deep Water using the MXNet backend, I get the following errors:
Loading H2O mxnet bindings.
Found CUDA_HOME or CUDA_PATH environment variable, trying to connect to GPU devices.
Loading CUDA library.
Loading mxnet library.
Loading H2O mxnet bindings.
Done loading H2O mxnet bindings.
Constructing model.
Done constructing model.
Building network.
mxnet data input shape: (32,100)
[10:40:16] /home/jenkins/slave_dir_from_mr-0xb1/workspace/deepwater-master/thirdparty/mxnet/dmlc-core/include/dmlc/logging.h:235: [10:40:16] src/operator/./convolution-inl.h:349: Check failed: (dshape.ndim()) == (4) Input data should be 4D in batch-num_filter-y-x
[10:40:16] src/symbol.cxx:189: Check failed: (MXSymbolInferShape(GetHandle(), keys.size(), keys.data(), arg_ind_ptr.data(), arg_shape_data.data(), &in_shape_size, &in_shape_ndim, &in_shape_data, &out_shape_size, &out_shape_ndim, &out_shape_data, &aux_shape_size, &aux_shape_ndim, &aux_shape_data, &complete)) == (0)
The details of my setup:
* Ubuntu: 16.04
* RAM: 12 GB
* Graphics card: Nvidia 920MX, driver version 384.90
* CUDA: 8.0.61
* cuDNN: 6.0
* R version: 3.4.3
* H2O version: 3.15.0.393 & h2o R package: 3.16.0.2
* mxnet: 0.11.0
* Train data size: 400 MB (around 822 MB when converted to an H2O frame object)
Things I have done:
1.) Gave enough memory to the Java heap when running the H2O cluster (java -Xmx9g -jar h2o.jar)
2.) Built mxnet from source for GPU
3.) Monitored the GPU and system via nvidia-smi and the system monitor. At no point do they eat up all the RAM to the point of an "out of memory" issue; I still have around 2-3 GB free before the error shows up
4.) Tried tensorflow-gpu (built from source). Checking pip list confirmed it is installed, but during model creation in R it gives the error:
Error: java.lang.RuntimeException: Unable to initialize the native Deep Learning backend: null
5.) The only way I got H2O Deep Water to work with all the backends, with and without GPU, is through the Docker setup provided in the installation tutorials.
I want the same functionality on my laptop instead of using Docker. Also, is there any way to run Deep Water using just the CPU? The link Is it possible to build Deep Water/TensorFlow model in H2O without CUDA doesn't provide any helpful answers. Any help or advice will be greatly appreciated!
...ANSWER
Answered 2018-Mar-02 at 01:34

As is evident from the error logs and from the documentation of mxnet.sym.Convolution, your data needs to be in [batch, channels, height, width] format. However, it looks like your data contains only two dimensions (based on this log: mxnet data input shape: (32,100)). Reformatting the data to include two dimensions of size 1, so that your input shape is (1,1,32,100), should resolve this issue.
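The fix is purely a shape change; it can be sketched framework-free in Python (variable names are illustrative, and real code would use numpy.reshape or mx.nd.reshape instead of nested lists):

```python
# A (32, 100) batch of flat feature vectors, as in the log line
# "mxnet data input shape: (32,100)".
batch = [[0.0] * 100 for _ in range(32)]

# Wrap it in two leading size-1 dimensions so the convolution sees a
# 4-D [batch, channels, height, width] input of shape (1, 1, 32, 100).
reshaped = [[batch]]

shape = (len(reshaped), len(reshaped[0]),
         len(reshaped[0][0]), len(reshaped[0][0][0]))
print(shape)  # (1, 1, 32, 100)
```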
QUESTION
I'm trying to install the xgboost R package on my Linux server using:
ANSWER
Answered 2018-Feb-23 at 22:22

One way of making a newer version of gcc and g++ appear ready for R is to force it to appear early in the system $PATH. Besides altering $PATH, one can use the fact that /usr/local/bin/ normally comes before /usr/bin, so an added newer version will be preferred.
So, assuming we installed a new gcc in /opt/gcc/gcc-4.9.2/, we could do
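The answer's exact commands are elided, but the mechanism it relies on can be demonstrated safely: a directory placed earlier in $PATH shadows /usr/bin/gcc. The sketch below uses a scratch directory and a stub script instead of a real compiler; all paths are illustrative (the real fix would symlink e.g. /opt/gcc/gcc-4.9.2/bin/gcc into /usr/local/bin):

```shell
# Create a scratch "bin" directory standing in for /usr/local/bin.
BINDIR=$(mktemp -d)

# Stub "newer gcc" that just reports its version.
printf '#!/bin/sh\necho "gcc (demo) 4.9.2"\n' > "$BINDIR/gcc"
chmod +x "$BINDIR/gcc"

# Prepend it to PATH, just as /usr/local/bin precedes /usr/bin.
export PATH="$BINDIR:$PATH"

gcc --version   # now resolves to the stand-in first
```

R will pick up whatever gcc it finds first on $PATH, which is why placing the newer toolchain in /usr/local/bin is enough.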
QUESTION
I am trying to install XGBoost on an EC2 instance and continually get the following error after trying "pip install xgboost":
...ANSWER
Answered 2017-Nov-15 at 18:48

You are missing the g++ compiler.
You did not mention which Linux distribution you are running.
Amazon Linux:
yum install make glibc-devel gcc patch
QUESTION
I am having trouble installing mxnet GPU for R on the Amazon Deep Learning Linux AMI. The environment variables are such a mess that it's a nightmare for any non-expert sysadmin to figure out.
Step 1: install the ridiculous number of missing/broken programs and R packages
...ANSWER
Answered 2017-Oct-25 at 23:58

Did you try the following when running any sudo commands?
QUESTION
I would like to train a neural network utilising all 4 GPUs on my g2.8xlarge EC2 instance using MXNet. I am using the following AWS Deep Learning Linux community AMI:
Deep Learning AMI Amazon Linux - 3.3_Oct2017 (ami-999844e0)
As per these instructions, when I connect to the instance I switch to Keras v1 with the MXNet backend by issuing this command:
...ANSWER
Answered 2017-Oct-25 at 18:50

This is because the Keras Conda environment has a dependency on the mxnet CPU pip package. You can install the GPU version inside the Conda environment with:
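The exact command is elided above; a hedged sketch of what it could look like follows (the environment name and the CUDA-8 package suffix are assumptions based on this AMI generation; adjust mxnet-cu80 to match your installed CUDA version):

```shell
# Activate the Keras Conda environment first (environment name assumed):
source activate keras
# Replace the CPU dependency with the GPU build:
pip uninstall -y mxnet
pip install mxnet-cu80   # e.g. mxnet-cu90 for CUDA 9
```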
Community Discussions, Code Snippets contain sources that include Stack Exchange Network
Vulnerabilities
No vulnerabilities reported