
super-csv-annotation | 'Super CSV' extension library for annotation | CSV Processing library

 by   mygreen Java Version: Current License: Apache-2.0

kandi X-RAY | super-csv-annotation Summary

super-csv-annotation is a Java library typically used in Utilities and CSV Processing applications. super-csv-annotation has no bugs, no reported vulnerabilities, a build file available, a permissive license, and low support. You can download it from GitHub or Maven.
This library is a 'Super CSV' extension that adds annotation support: it automatically builds CellProcessor chains from annotations on JavaBeans, and it provides simple, localized messages.

Support

  • super-csv-annotation has a low active ecosystem.
  • It has 27 stars and 5 forks. There are 2 watchers for this library.
  • It had no major release in the last 12 months.
  • There are 2 open issues and 25 have been closed. On average issues are closed in 80 days. There are no pull requests.
  • It has a neutral sentiment in the developer community.
  • The latest version of super-csv-annotation is current.

Quality

  • super-csv-annotation has 0 bugs and 0 code smells.

Security

  • super-csv-annotation has no vulnerabilities reported, and its dependent libraries have no vulnerabilities reported.
  • super-csv-annotation code analysis shows 0 unresolved vulnerabilities.
  • There are 0 security hotspots that need review.

License

  • super-csv-annotation is licensed under the Apache-2.0 License. This license is Permissive.
  • Permissive licenses have the least restrictions, and you can use them in most projects.

Reuse

  • super-csv-annotation releases are not available. You will need to build from source code and install.
  • Deployable package is available in Maven.
  • Build file is available. You can build the component from source.
  • Installation instructions are not available. Examples and code snippets are available.
Top functions reviewed by kandi - BETA

kandi has reviewed super-csv-annotation and identified the functions below as its top functions. This is intended to give you an instant insight into the functionality super-csv-annotation implements, and to help you decide if it suits your requirements.

  • Pad a string to a given length
  • Create a new ResourceBundle
  • Parse the message
  • Find override attribute
  • Write an object to the CSV
  • Read a single row
  • Add callback methods
  • Generate a list of column mapping column names
  • Process constraint violations
  • Trim off left alignment

super-csv-annotation Key Features

'Super CSV' extension library for annotation

super-csv-annotation Examples and Code Snippets

Depends

+ Java 1.8
    - (Super CSV 2.x supports Java 1.6+, but this library requires Java 1.8)
+ Super CSV 2.4+

# Setup

1. Add the dependency for Super CSV Annotation.
    ```xml
    <dependency>
        <groupId>com.github.mygreen</groupId>
        <artifactId>super-csv-annotation</artifactId>
        <version>2.2</version>
    </dependency>
    ```
2. Add a dependency for a logging library, for example Log4j.
    ```xml
    <dependency>
        <groupId>org.slf4j</groupId>
        <artifactId>slf4j-log4j12</artifactId>
        <version>1.7.1</version>
    </dependency>
    <dependency>
        <groupId>log4j</groupId>
        <artifactId>log4j</artifactId>
        <version>1.2.14</version>
    </dependency>
    ```
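
As a usage sketch only (not from this page): the annotation and reader names below (`@CsvBean`, `@CsvColumn`, `CsvAnnotationBeanReader`) are taken from the library's documented API, but the exact signatures should be verified against the manual and Javadoc linked under "Document" before use.

```java
import java.io.FileReader;
import java.util.List;

import org.supercsv.prefs.CsvPreference;

import com.github.mygreen.supercsv.annotation.CsvBean;
import com.github.mygreen.supercsv.annotation.CsvColumn;
import com.github.mygreen.supercsv.io.CsvAnnotationBeanReader;

// A bean whose CellProcessor chain is built automatically from the annotations.
@CsvBean(header = true)
class UserCsv {

    @CsvColumn(number = 1, label = "id")
    private int id;

    @CsvColumn(number = 2, label = "name")
    private String name;

    // getters and setters omitted for brevity
}

public class ReadDemo {
    public static void main(String[] args) throws Exception {
        // The header row is consumed because of @CsvBean(header = true).
        CsvAnnotationBeanReader<UserCsv> csvReader = new CsvAnnotationBeanReader<>(
                UserCsv.class, new FileReader("users.csv"), CsvPreference.STANDARD_PREFERENCE);
        List<UserCsv> users = csvReader.readAll();
        csvReader.close();
    }
}
```

The file name `users.csv` and the bean fields are illustrative; any JavaBean with `@CsvColumn`-annotated properties can be read the same way.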
                      
# Build

1. Set up Java SE 8 (1.8.0_121+).
2. Set up Maven.
3. Set up Sphinx (used to build the manual).
    1. Install Python.
    2. Install Sphinx, the Read the Docs theme, and janome.
    ```console
    # pip install sphinx
    # pip install sphinx_rtd_theme --upgrade
    # pip install janome
    ```
4. Build with Maven.
    1. Make the jar files.
    ```console
    # mvn clean package
    ```
    2. Generate the site.
    ```console
    # mvn site -Dgpg.skip=true
    ```

# Document

- Project information
  - http://mygreen.github.io/super-csv-annotation/index.html
- Manual
  - http://mygreen.github.io/super-csv-annotation/sphinx/index.html
- Javadoc
  - http://mygreen.github.io/super-csv-annotation/apidocs/index.html
  - http://javadoc.io/doc/com.github.mygreen/super-csv-annotation/

Community Discussions

QUESTION

Performance issues reading CSV files in a Java (Spring Boot) application

Asked 2022-Jan-29 at 12:37

I am currently working on a Spring-based API which has to transform CSV data and expose it as JSON. It has to read big CSV files, each containing more than 500 columns and 2.5 million lines. I am not guaranteed to have the same header between files (each file can have a completely different header than another), so I have no way to create a dedicated class which would provide a mapping for the CSV headers. Currently the API controller calls a CSV service which reads the CSV data using a BufferedReader.

The code works fine on my local machine, but it is very slow: it takes about 20 seconds to process 450 columns and 40,000 lines. To improve processing speed, I tried to implement multithreading with Callables, but I am not familiar with that kind of concept, so the implementation might be wrong.

Beyond that, the API runs out of heap memory when running on the server. I know that a solution would be to increase the amount of available memory, but I suspect that the replace() and split() operations on strings performed in the Callables are responsible for consuming a large amount of heap memory.

So I actually have several questions:

#1. How could I improve the speed of the CSV reading?

#2. Is the multithreaded implementation with Callable correct?

#3. How could I reduce the amount of heap memory used in the process?

#4. Do you know of a different approach to split at commas and replace the double quotes in each CSV line? Would StringBuilder be of any help here? What about StringTokenizer?

Below is the CSV method.

public static final int NUMBER_OF_THREADS = 10;

public static List<List<String>> readCsv(InputStream inputStream) {
    List<List<String>> rowList = new ArrayList<>();
    ExecutorService pool = Executors.newFixedThreadPool(NUMBER_OF_THREADS);
    List<Future<List<String>>> listOfFutures = new ArrayList<>();
    try {
        BufferedReader reader = new BufferedReader(new InputStreamReader(inputStream, StandardCharsets.UTF_8));
        String line = null;
        while ((line = reader.readLine()) != null) {
            CallableLineReader callableLineReader = new CallableLineReader(line);
            Future<List<String>> futureCounterResult = pool.submit(callableLineReader);
            listOfFutures.add(futureCounterResult);
        }
        reader.close();
        pool.shutdown();
    } catch (Exception e) {
        // log "Error reading csv file"
    }

    for (Future<List<String>> future : listOfFutures) {
        try {
            // collect each parsed row into the result
            rowList.add(future.get());
        } catch (ExecutionException | InterruptedException e) {
            // log "CSV processing interrupted during execution"
        }
    }

    return rowList;
}
                      

And the Callable implementation:

public class CallableLineReader implements Callable<List<String>> {

    private final String line;

    public CallableLineReader(String line) {
        this.line = line;
    }

    @Override
    public List<String> call() throws Exception {
        // Strip double quotes, then split on commas.
        return Arrays.asList(line.replace("\"", "").split(","));
    }
}

ANSWER

Answered 2022-Jan-29 at 02:56

I don't think that splitting this work onto multiple threads is going to provide much improvement, and it may in fact make the problem worse by consuming even more memory. The main problem is using too much heap memory, and the performance problem is likely due to excessive garbage collection when the remaining available heap is very small (but it's best to measure and profile to determine the exact cause of performance problems).

The memory consumption comes less from the replace and split operations, and more from the fact that the entire contents of the file need to be read into memory in this approach. Each line may not consume much memory, but multiplied by millions of lines, it all adds up.

If you have enough memory available on the machine to assign a heap size large enough to hold the entire contents, that will be the simplest solution, as it won't require changing the code.

Otherwise, the best way to deal with large amounts of data in a bounded amount of memory is to use a streaming approach. This means that each line of the file is processed and then passed directly to the output, without collecting all of the lines in memory in between. This will require changing the method signature to use a return type other than List. Assuming you are using Java 8 or later, the Stream API can be very helpful. You could rewrite the method like this:

public static Stream<List<String>> readCsv(InputStream inputStream) {
    BufferedReader reader = new BufferedReader(new InputStreamReader(inputStream, StandardCharsets.UTF_8));
    return reader.lines().map(line -> Arrays.asList(line.replace("\"", "").split(",")));
}

Note that this throws unchecked exceptions in case of an I/O error.

This will read and transform each line of input as needed by the caller of the method, and will allow previous lines to be garbage collected if they are no longer referenced. This then requires that the caller of this method also consume the data line by line, which can be tricky when generating JSON. The Jakarta EE JsonGenerator API offers one possible approach. If you need help with this part of it, please open a new question including details of how you're currently generating JSON.
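
Not part of the original answer, but to make the consumption side concrete: a minimal, stdlib-only sketch (class and data names are illustrative) that feeds an in-memory CSV through the streaming readCsv and handles each row as it arrives, so no row list is ever accumulated.

```java
import java.io.BufferedReader;
import java.io.ByteArrayInputStream;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.nio.charset.StandardCharsets;
import java.util.Arrays;
import java.util.List;
import java.util.stream.Stream;

public class StreamingCsvDemo {

    // Same streaming transform as in the answer: rows are produced one at a time.
    static Stream<List<String>> readCsv(InputStream inputStream) {
        BufferedReader reader = new BufferedReader(
                new InputStreamReader(inputStream, StandardCharsets.UTF_8));
        return reader.lines().map(line -> Arrays.asList(line.replace("\"", "").split(",")));
    }

    public static void main(String[] args) {
        String csv = "\"id\",\"name\"\n1,alice\n2,bob\n";
        InputStream in = new ByteArrayInputStream(csv.getBytes(StandardCharsets.UTF_8));

        // Each row is handled and then becomes garbage-collectable;
        // the full file is never held in memory.
        readCsv(in).forEach(row -> System.out.println(String.join("|", row)));
        // prints:
        // id|name
        // 1|alice
        // 2|bob
    }
}
```

On a real file, the same forEach would be driven by an InputStream from Files.newInputStream, and the terminal operation could write JSON incrementally instead of printing.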

Source https://stackoverflow.com/questions/70900587

Community Discussions and Code Snippets contain sources that include the Stack Exchange Network.

Vulnerabilities

No vulnerabilities reported

Install super-csv-annotation

You can download it from GitHub or Maven.
You can use super-csv-annotation like any standard Java library. Include the jar files in your classpath. You can also use any IDE, and you can run and debug the super-csv-annotation component as you would any other Java program. Best practice is to use a build tool that supports dependency management, such as Maven or Gradle. For Maven installation, please refer to maven.apache.org; for Gradle installation, please refer to gradle.org.
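
For Gradle users, a sketch of the equivalent dependency declaration (coordinates and version taken from the Maven snippet above; adjust the version to the latest release):

```gradle
dependencies {
    implementation 'com.github.mygreen:super-csv-annotation:2.2'
}
```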

Support

For any new features, suggestions, and bugs, create an issue on GitHub. If you have questions, check and ask on Stack Overflow.

© 2022 Open Weaver Inc.