mongo-deep-mapreduce | Use Hadoop MapReduce directly on Mongo data
kandi X-RAY | mongo-deep-mapreduce Summary
mongo-deep-mapreduce is a Java library typically used in MongoDB applications. It has no reported bugs or vulnerabilities, a build file available, a permissive license, and low support. You can download it from GitHub or Maven.
This is a library of MongoDB-related Hadoop MapReduce classes, in particular an InputFormat that reads directly from Mongo's binary on-disk format. Developed by Peter Bakkum at Groupon in Palo Alto.

Problem: If you want to use Hadoop MapReduce with a Mongo collection, you currently have two options:
- Execute one or more cursors over the entire cluster in your MapReduce job.
- Export the collection as BSON or JSON, which also executes a cursor over the entire collection, and MapReduce over the exported data.

However, with a large data set that significantly exceeds the available memory on the Mongo host, both options can be prohibitively time consuming.

Solution: Move the raw Mongo files into HDFS, without exporting, and MapReduce over them using this library. Mongo uses a proprietary binary format to manage its data, which is essentially a doubly-linked list of BSON records. By reading this format directly, we obviate the need for expensive data conversion prior to a Hadoop MapReduce, and we can utilize the full throughput of the Hadoop cluster when reading the data, rather than using single-threaded cursors.
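Because the on-disk format is essentially a linked list of BSON records, a reader only needs BSON's framing rules to walk it: every BSON document begins with a little-endian int32 giving its total size in bytes. The following is a minimal, self-contained sketch of that length-prefix read (illustrating the BSON convention, not code taken from this library):

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

public class BsonLengthDemo {
    // Every BSON document starts with a little-endian int32 holding the
    // document's total size in bytes, including the 4-byte length field
    // itself and the trailing 0x00 terminator.
    static int readDocumentLength(byte[] data, int offset) {
        return ByteBuffer.wrap(data, offset, 4)
                .order(ByteOrder.LITTLE_ENDIAN)
                .getInt();
    }

    public static void main(String[] args) {
        // The smallest BSON document, {}, is 5 bytes: [05 00 00 00 00]
        byte[] emptyDoc = {0x05, 0x00, 0x00, 0x00, 0x00};
        System.out.println(readDocumentLength(emptyDoc, 0)); // 5
    }
}
```

Knowing each record's length up front is what makes it possible to split the raw files and scan them in parallel across a Hadoop cluster.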
Support
mongo-deep-mapreduce has a low active ecosystem.
It has 29 stars, 17 forks, and 11 watchers.
It had no major release in the last 6 months.
There are 0 open issues and 1 has been closed; on average, issues are closed in 16 days. There are no pull requests.
It has a neutral sentiment in the developer community.
The latest version of mongo-deep-mapreduce is current.
Quality
mongo-deep-mapreduce has 0 bugs and 0 code smells.
Security
mongo-deep-mapreduce has no vulnerabilities reported, and its dependent libraries have no vulnerabilities reported.
mongo-deep-mapreduce code analysis shows 0 unresolved vulnerabilities.
There are 0 security hotspots that need review.
License
mongo-deep-mapreduce is licensed under the BSD-3-Clause License. This license is Permissive.
Permissive licenses have the least restrictions, and you can use them in most projects.
Reuse
mongo-deep-mapreduce releases are not available. You will need to build from source code and install.
Deployable package is available in Maven.
Build file is available. You can build the component from source.
Installation instructions are not available. Examples and code snippets are available.
Top functions reviewed by kandi - BETA
kandi has reviewed mongo-deep-mapreduce and discovered the below as its top functions. This is intended to give you an instant insight into mongo-deep-mapreduce implemented functionality, and help decide if they suit your requirements.
- Compares two input splits
- Compares this object to another
- Compares two extent objects
- Gets the input splits
- Gets splits from a file
- Returns an iterator over all the extents in this directory
- Deserialize the fields
- Read a string from a DataInput
- Custom deserialization method
- Entry point for the Hive table
- Set Hadoop configuration
- Serialize a MongoInputSplit into a binary representation
- This method writes the file to the output
- Write a MapWritable to a MongoDB operation
- Returns a Java object equivalent to the given writable value
- Iterates over the list of directories
- Returns an iterator over all records in this file system
- Reads a BSONObject from the given DataInput
- Read an int from a byte array at a given offset
- Read a file
- Writes the given WritableObject to the given MongoJob object
- Gets the locations
- Compares two SON objects
- Starts MongoDB job
- Starts MongoDB command
Get all kandi verified functions for this library.
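Several of the functions above (reading strings and ints from a DataInput, deserializing BSON objects) revolve around BSON's encoding conventions. As a hedged illustration of what a function like "Read a string from a DataInput" likely involves — this is a sketch of the standard BSON string encoding, not the library's actual method:

```java
import java.io.ByteArrayInputStream;
import java.io.DataInput;
import java.io.DataInputStream;
import java.io.IOException;
import java.nio.charset.StandardCharsets;

public class BsonStringDemo {
    // BSON encodes a UTF-8 string as: a little-endian int32 byte count
    // (which includes the trailing 0x00), the UTF-8 bytes, then 0x00.
    static String readBsonString(DataInput in) throws IOException {
        int b0 = in.readUnsignedByte(), b1 = in.readUnsignedByte(),
            b2 = in.readUnsignedByte(), b3 = in.readUnsignedByte();
        int len = b0 | (b1 << 8) | (b2 << 16) | (b3 << 24); // little-endian
        byte[] buf = new byte[len - 1];   // exclude the trailing 0x00
        in.readFully(buf);
        in.readByte();                    // consume the terminator
        return new String(buf, StandardCharsets.UTF_8);
    }

    public static void main(String[] args) throws IOException {
        // "hi" -> length 3 (2 bytes + terminator), bytes, then 0x00
        byte[] encoded = {0x03, 0x00, 0x00, 0x00, 'h', 'i', 0x00};
        DataInput in = new DataInputStream(new ByteArrayInputStream(encoded));
        System.out.println(readBsonString(in)); // hi
    }
}
```

Note that DataInput's own readInt is big-endian, which is why a BSON reader assembles the little-endian length byte by byte.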
mongo-deep-mapreduce Key Features
No Key Features are available at this moment for mongo-deep-mapreduce.
mongo-deep-mapreduce Examples and Code Snippets
No Code Snippets are available at this moment for mongo-deep-mapreduce.
Community Discussions
No Community Discussions are available at this moment for mongo-deep-mapreduce. Refer to the Stack Overflow page for discussions.
Community Discussions, Code Snippets contain sources that include Stack Exchange Network
Vulnerabilities
No vulnerabilities reported
Install mongo-deep-mapreduce
You can download it from GitHub, Maven.
You can use mongo-deep-mapreduce like any standard Java library. Include the jar files in your classpath. You can also use any IDE to run and debug the mongo-deep-mapreduce component as you would any other Java program. Best practice is to use a build tool that supports dependency management, such as Maven or Gradle. For Maven installation, refer to maven.apache.org; for Gradle installation, refer to gradle.org.
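If you pull the artifact from Maven rather than building from source, a dependency declaration would look like the following. The coordinates here are placeholders, not the library's published coordinates — verify the actual groupId, artifactId, and version against the repository or Maven Central before use:

```xml
<!-- Hypothetical coordinates; verify against Maven Central before use -->
<dependency>
    <groupId>com.example</groupId>
    <artifactId>mongo-deep-mapreduce</artifactId>
    <version>VERSION</version>
</dependency>
```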
Support
For any new features, suggestions and bugs create an issue on GitHub.
If you have any questions, check and ask on the Stack Overflow community page.