spark-salesforce | Spark data source for Salesforce | REST library

 by springml · Scala · Version: Current · License: Apache-2.0

kandi X-RAY | spark-salesforce Summary

spark-salesforce is a Scala library typically used in Web Services and REST applications. spark-salesforce has no bugs, no vulnerabilities, a permissive license (Apache-2.0), and low support. You can download it from GitHub.

A library for connecting Spark with Salesforce and Salesforce Wave.
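
A minimal read example may help orient readers (a hedged sketch based on the project's README; the credentials and SOQL query are placeholders):

```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().appName("sf-read").getOrCreate()

// Run a SOQL query against Salesforce and load the result as a DataFrame.
// The password value must have the Salesforce security token appended.
val df = spark.read.
  format("com.springml.spark.salesforce").
  option("username", "your_salesforce_username").
  option("password", "your_salesforce_password_with_security_token").
  option("soql", "SELECT Id, Name FROM Account").
  load()

df.show()
```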

            kandi-support Support

              spark-salesforce has a low active ecosystem.
              It has 55 stars, 44 forks, and 9 watchers.
              It has had no major release in the last 6 months.
              There are 24 open issues and 21 closed issues. On average, issues are closed in 107 days. There are 5 open pull requests and 0 closed pull requests.
              It has a neutral sentiment in the developer community.
              The latest version of spark-salesforce is current.

            kandi-Quality Quality

              spark-salesforce has 0 bugs and 9 code smells.

            kandi-Security Security

              spark-salesforce has no vulnerabilities reported, and its dependent libraries have no vulnerabilities reported.
              spark-salesforce code analysis shows 0 unresolved vulnerabilities.
              There are 0 security hotspots that need review.

            kandi-License License

              spark-salesforce is licensed under the Apache-2.0 License. This license is Permissive.
              Permissive licenses have the least restrictions, and you can use them in most projects.

            kandi-Reuse Reuse

              spark-salesforce releases are not available. You will need to build from source code and install.
              Installation instructions are not available. Examples and code snippets are available.
              It has 1772 lines of code, 80 functions and 15 files.
              It has high code complexity. Code complexity directly impacts maintainability of the code.

            Top functions reviewed by kandi - BETA

            kandi's functional review helps you automatically verify the functionalities of the libraries and avoid rework.

            spark-salesforce Key Features

            No Key Features are available at this moment for spark-salesforce.

            spark-salesforce Examples and Code Snippets

            Scala API
            Scala · Lines of Code: 56 · License: Permissive (Apache-2.0)
            // Writing Dataset
            // Using spark-csv package to load dataframes
            val df = spark.
                            read.
                            format("com.databricks.spark.csv").
                            option("header", "true").
                            load("your_csv_location")

            // Write the DataFrame to a Salesforce Wave dataset
            // (credentials and dataset name are placeholders)
            df.
               write.
               format("com.springml.spark.salesforce").
               option("username", "your_salesforce_username").
               option("password", "your_salesforce_password_with_security_token").
               option("datasetName", "your_dataset_name").
               save()
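
Besides loading a dataset into Salesforce Wave, the library can also write a DataFrame back to a Salesforce object. A hedged sketch based on the project's README (the object name and credentials are placeholders; the sfObject option selects the target object):

```scala
// Write the rows of df into the Contact object in Salesforce.
// Credentials are placeholders.
df.write.
  format("com.springml.spark.salesforce").
  option("username", "your_salesforce_username").
  option("password", "your_salesforce_password_with_security_token").
  option("sfObject", "Contact").
  save()
```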
            Java API
            Java · Lines of Code: 52 · License: Permissive (Apache-2.0)
            // Writing Dataset
            DataFrame df = spark
                                .read()
                                .format("com.databricks.spark.csv")
                                .option("header", "true")
                                .load("your_csv_location");

            // Write the DataFrame to a Salesforce Wave dataset
            // (credentials and dataset name are placeholders)
            df.write()
                  .format("com.springml.spark.salesforce")
                  .option("username", "your_salesforce_username")
                  .option("password", "your_salesforce_password_with_security_token")
                  .option("datasetName", "your_dataset_name")
                  .save();
            Spark Salesforce Library,Metadata Configuration
            Scala · Lines of Code: 36 · License: Permissive (Apache-2.0)
            
            {
              "": {
                "wave_type": "",
                "precision": "",
                "scale": "",
                "format": "",
                "defaultValue": ""
              }
            }

            {
              "float": {
                "wave_type": "Numeric",
                "precision": "10",
                "scale": "2",
                "format": "##0.00",
                "defaultValue": "0.00"
              }
            }
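
To apply a custom metadata configuration like the one above, the JSON is passed as a string through the metadataConfig option when writing to Wave (a hedged sketch; credentials and the dataset name are placeholders):

```scala
// Override the default Wave metadata for the "float" type.
val metadataConfig = """{
  "float": {
    "wave_type": "Numeric",
    "precision": "10",
    "scale": "2",
    "format": "##0.00",
    "defaultValue": "0.00"
  }
}"""

df.write.
  format("com.springml.spark.salesforce").
  option("username", "your_salesforce_username").
  option("password", "your_salesforce_password_with_security_token").
  option("datasetName", "your_dataset_name").
  option("metadataConfig", metadataConfig).
  save()
```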

            Community Discussions

            QUESTION

            SpringML-Salesforce, cannot create xmlstreamreader from org.codehaus.stax2.io.Stax2
            Asked 2021-Apr-20 at 12:20

            I'm using https://github.com/springml/spark-salesforce to query against a Salesforce API. It works fine for standard queries, but when I add the bulk options they've listed, it hits the error I've included below. Let me know if I'm making any basic mistakes; based on their documentation, I believe this is the correct approach.

            Trying to use a bulk query against our API. Using the below SOQL statement

            ...

            ANSWER

            Answered 2021-Apr-20 at 12:20

            This is a problem with the stax2 library. Add the woodstox-core-asl-4.4.1.jar file to the dependent JARs in the Glue job configuration and it will solve this error.

            Source https://stackoverflow.com/questions/67063848
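
Outside of AWS Glue, the same fix amounts to putting a stax2 implementation (Woodstox) on the job's classpath. A build.sbt sketch (version numbers are illustrative, not verified against your Spark version):

```scala
// build.sbt: add Woodstox so bulk-query XML parsing has a
// stax2 implementation available at runtime.
libraryDependencies ++= Seq(
  "com.springml" % "spark-salesforce_2.11" % "1.1.3",
  "org.codehaus.woodstox" % "woodstox-core-asl" % "4.4.1"
)
```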

            Community Discussions, Code Snippets contain sources that include Stack Exchange Network

            Vulnerabilities

            No vulnerabilities reported

            Install spark-salesforce

            You can download it from GitHub.

            Support

          • bulk: (Optional) Flag to enable bulk queries. This is the preferred method when loading large sets of data; Salesforce processes the batches in the background. Default value is false.
          • pkChunking: (Optional) Flag to enable automatic primary-key chunking for a bulk query job. This splits bulk queries into separate batches of the size defined by the chunkSize option. Defaults to false; the default chunk size is 100,000.
          • chunkSize: (Optional) The number of records to include in each batch. Default value is 100,000. This option can only be used when pkChunking is true. Maximum size is 250,000.
          • timeout: (Optional) The maximum time spent polling for the completion of a bulk query job. This option can only be used when bulk is true.
          • maxCharsPerColumn: (Optional) The maximum length of a column. This option can only be used when bulk is true. Default value is 4096.
          • externalIdFieldName: (Optional) The name of the field used as the external ID for the Salesforce object. This value is only used when doing an update or upsert. Default is "Id".
          • queryAll: (Optional) Toggle to retrieve deleted and archived records in SOQL queries. Default value is false.
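
The bulk-related options above combine as follows (a hedged sketch; credentials, the SOQL query, and the sfObject value are placeholders, and the README indicates the sfObject option is needed for bulk queries):

```scala
// Bulk query with primary-key chunking for a large extract.
val df = spark.read.
  format("com.springml.spark.salesforce").
  option("username", "your_salesforce_username").
  option("password", "your_salesforce_password_with_security_token").
  option("soql", "SELECT Id, Name FROM Account").
  option("sfObject", "Account").
  option("bulk", "true").
  option("pkChunking", "true").
  option("chunkSize", "250000").
  load()
```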
            CLONE
          • HTTPS

            https://github.com/springml/spark-salesforce.git

          • CLI

            gh repo clone springml/spark-salesforce

          • sshUrl

            git@github.com:springml/spark-salesforce.git
