
lucene | near real-time search for a web project with Lucene (Solr, NRTManager, SearcherManager)

by doushini | Java | Version: Current | License: No License


kandi X-RAY | lucene Summary

lucene is a Java library. It has no reported bugs, no reported vulnerabilities, and high support. However, it does not ship a build file. You can download it from GitHub.
It implements near real-time search for a web project using Solr and Lucene's NRTManager and SearcherManager.

Support

  • lucene has a highly active ecosystem.
  • It has 66 stars, 69 forks, and 20 watchers.
  • It had no major release in the last 12 months.
  • There is 1 open issue and 0 closed issues. On average, issues are closed in 1585 days. There is 1 open pull request and 0 closed pull requests.
  • It has a positive sentiment in the developer community.
  • The latest version of lucene is current.

Quality

  • lucene has 0 bugs and 0 code smells.

Security

  • lucene has no vulnerabilities reported, and its dependent libraries have no vulnerabilities reported.
  • lucene code analysis shows 0 unresolved vulnerabilities.
  • There are 0 security hotspots that need review.

License

  • lucene does not have a standard license declared.
  • Check the repository for any license declaration and review the terms closely.
  • Without a license, all rights are reserved, and you cannot use the library in your applications.

Reuse

  • lucene has no published releases; you will need to build it from source and install it.
  • lucene has no build file, so you will need to create one yourself to build the component from source.
  • It has 1279 lines of code, 138 functions and 34 files.
  • It has medium code complexity. Code complexity directly impacts maintainability of the code.
Top functions reviewed by kandi - BETA

kandi has reviewed lucene and discovered the below as its top functions. This is intended to give you an instant insight into lucene implemented functionality, and help decide if they suit your requirements.

  • Find by index.
  • Initialize the NRT manager.
  • Convert a message object to an index field.
  • Search the Lucene index.
  • Move to the previous row.
  • Delete a message by id.
  • Get the generic parameter type from a given class.
  • Perform a Lucene search.
  • Delete all temp indices.
  • Set the servlet request.

                      Get all kandi verified functions for this library.

                      lucene Key Features

                      Implements near real-time search for a web project using Solr and Lucene's NRTManager and SearcherManager.
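The feature above is Lucene's near-real-time (NRT) reopen pattern: after index writes, a manager publishes a fresh searcher while searches already in flight keep the view they acquired. In modern Lucene this lifecycle is what SearcherManager provides (acquire, search, release, and maybeRefresh after writes); NRTManager itself was removed in later releases. Below is a library-free, single-threaded sketch of that lifecycle, with a plain String standing in for an IndexSearcher. All names here are illustrative, not Lucene's API.

```java
import java.util.concurrent.atomic.AtomicReference;

// Illustrative stand-in for Lucene's SearcherManager lifecycle:
// acquire() borrows the latest view, maybeRefresh() publishes a new one
// after index writes, and release() would decRef in real Lucene.
class ReopenManager {
    private final AtomicReference<String> current =
            new AtomicReference<>("searcher-gen-0");
    private int generation = 0; // single-threaded sketch; real Lucene ref-counts

    String acquire() {
        return current.get(); // callers run their search against this view
    }

    void release(String searcher) {
        // no-op here; the real SearcherManager decrements a reference count
    }

    // Called after index writes: swap in a fresh view. Searches that
    // already acquired the old view are unaffected.
    void maybeRefresh() {
        current.set("searcher-gen-" + (++generation));
    }
}
```

The real SearcherManager adds reference counting so an old reader is closed only after every in-flight search has released it; that bookkeeping is omitted in this sketch.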

                      Optimise conditional queries in Azure Cognitive Search

                      (*searchText* AND Privacy:false)
                      
                      (*searchText* AND Privacy:false) OR (UserId:*searchText*)
                      
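As a small, hypothetical sketch (not part of the original answer), the two query variants above can be assembled in Java. Privacy and UserId are the index fields assumed by the question, and the search term is assumed to be already wildcarded.

```java
// Hypothetical helper that builds the two Azure Cognitive Search query
// variants shown above from a raw (already wildcarded) search term.
class SearchQueryBuilder {
    // Matches public documents only.
    static String publicOnly(String term) {
        return "(" + term + " AND Privacy:false)";
    }

    // Also matches documents owned by the searching user.
    static String publicOrOwn(String term) {
        return publicOnly(term) + " OR (UserId:" + term + ")";
    }
}
```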

                      Hibernate Search returns a string type

                      public interface PdfMapRepository extends JpaRepository<PdfMap, Integer> {
                      
                        @Query("select pf.pdfField from PdfMap pf where pf.id = :id")
                        Optional<String> findPdfFieldByPdfMapId(Integer id);
                      
                      }
                      
                      TypedQuery<String> query = em.createQuery("select pf.pdfField from PdfMap pf where pf.id = :id", String.class);
                      List<String> results = query.getResultList();
                      
                      Session session = sessionFactory.openSession();
                      CriteriaBuilder cb = session.getCriteriaBuilder();
                      
                      CriteriaQuery<String> cr = cb.createQuery(String.class);
                      Root<PdfMap> root = cr.from(PdfMap.class);
                      cr.select(root.<String>get("pdfField"));
                      
                      Query<String> query = session.createQuery(cr);
                      List<String> formFields = query.getResultList();
                      
                      

                      How to run Nearest Neighbour Search with Lucene HnswGraph

                      @Test
                      public void testWriteAndQueryIndex() throws IOException {
                          // Persist and read the data
                          try (MMapDirectory dir = new MMapDirectory(indexPath)) {
                              // Write index
                              int indexedDoc = writeIndex(dir, vectors);
                              // Read index
                              readAndQuery(dir, vectors, indexedDoc);
                          }
                      }
                      
                      Test vectors:
                      0 => [0.13|0.37]
                      1 => [0.99|0.49]
                      2 => [0.98|0.57]
                      3 => [0.23|0.64]
                      4 => [0.72|0.92]
                      5 => [0.08|0.74]
                      6 => [0.50|0.27]
                      7 => [0.97|0.02]
                      8 => [0.90|0.21]
                      9 => [0.89|0.09]
                      10 => [0.11|0.95]
                      
                      Doc Based Search:
                      Searching for NN of [0.98 | 0.01]
                      TotalHits: 11
                      7 => [0.97|0.02]
                      9 => [0.89|0.09]
                      
                      try (IndexReader reader = DirectoryReader.open(dir)) {
                          IndexSearcher searcher = new IndexSearcher(reader);
                          System.out.println("Query: [" + String.format("%.2f", queryVector[0]) + ", " + String.format("%.2f", queryVector[1]) + "]");
                          TopDocs results = searcher.search(new KnnVectorQuery("field", queryVector, 3), 10);
                          System.out.println("Hits: " + results.totalHits);
                          for (ScoreDoc sdoc : results.scoreDocs) {
                              Document doc = reader.document(sdoc.doc);
                              StoredField idField = (StoredField) doc.getField("id");
                              System.out.println("Found: " + idField.numericValue() + " = " + String.format("%.1f", sdoc.score));
                          }
                      }
                      

                      Grafana Elasticsearch - Query condition that references field value

                      POST _aliases
                      {
                        "actions": [
                          {
                            "add": {
                              "index": "myIndex",
                              "alias": "myAlias",
                              "filter": {
                                "bool": {
                                  "must": [
                                    {
                                      "query_string": {
                                        "query": "documentDate:[now-365d TO now]"
                                      }
                                    },
                                    {
                                      "bool": {
                                        "should": [
                                          {
                                            "script": {
                                              "script": {
                                                "source": "doc['lastAnalysisDate'].value.toInstant().toEpochMilli() >= doc['documentDate'].value.minusYears(1).toInstant().toEpochMilli() && doc['lastAnalysisDate'].value.toInstant().toEpochMilli() <= doc['documentDate'].value.toInstant().toEpochMilli()"
                                              }
                                            }
                                          }
                                        ]
                                      }
                                    }
                                  ]
                                }
                              }
                            }
                          }
                        ]
                      }
                      

                      How to write/serialize lucene's ByteBuffersDirectory to disk?

                      final Directory dir = new ByteBuffersDirectory();
                      
                      Directory to = FSDirectory.open(Paths.get(OUT_DIR_PATH));
                      
                      IOContext ctx = new IOContext();
                      for (String file : dir.listAll()) {
                          System.out.println(file); // just for testing
                          to.copyFrom(dir, file, file, ctx);
                      }
                      
                      _0.cfe
                      _0.cfs
                      _0.si
                      segments_1
                      
                          @SneakyThrows
                          public static void copyIndex(ByteBuffersDirectory ramDirectory, Path destination) {
                              FSDirectory fsDirectory = FSDirectory.open(destination);
                              Arrays.stream(ramDirectory.listAll())
                                      .forEach(fileName -> {
                                          try {
                                              // IOContext is null because it is not actually used (at least for the moment)
                                              fsDirectory.copyFrom(ramDirectory, fileName, fileName, null);
                                          } catch (IOException e) {
                                              log.error(e.getMessage(), e);
                                          }
                                      });
                          }
                      

                      Azure search services issue for white space and wildcard search of special characters

                      ...
                      {
                        "name": "Summary",
                        "type": "Edm.String",
                        "retrievable": true,
                        "searchable": true,
                        "analyzer": "custom_analyzer_for_tokenizing_as_is"
                      },
                      ...
                      
                      {
                          "name": "FieldName",
                          "type": "Edm.String",
                          "searchable": true,
                          "filterable": true,
                          "retrievable": true,
                          "sortable": true,
                          "facetable": true,
                          "key": false,
                          "indexAnalyzer": null,
                          "searchAnalyzer": null,
                          "analyzer": "specialcharanalyzer",
                          "synonymMaps": []
                      },
                      
                      "analyzers": [
                          {
                              "@odata.type": "#Microsoft.Azure.Search.CustomAnalyzer",
                              "name": "specialcharanalyzer",
                              "tokenizer": "whitespace",
                              "tokenFilters": [
                                  "lowercase"
                              ],
                              "charFilters": []
                          }
                      ],
                      
                      + - & | ! ( ) { } [ ] ^ " ~ * ? : \ /
                      
                      "search": "/.*SearchChar.*/",
                      
                      "search": "/.*$.*/",
                      
                      "search" : "/.*\\escapingcharacter.*/",
                      
                      "search" : "/.*\\+.*/",
                      
                      "search":"/\\**/",
                      

                      Searching for product codes and phone numbers in Lucene

                      import org.apache.lucene.analysis.Analyzer;
                      import org.apache.lucene.analysis.Tokenizer;
                      import org.apache.lucene.analysis.TokenStream;
                      import org.apache.lucene.analysis.core.KeywordTokenizer;
                      import org.apache.lucene.analysis.core.LowerCaseFilter;
                      import org.apache.lucene.analysis.miscellaneous.WordDelimiterGraphFilterFactory;
                      import java.util.Map;
                      import java.util.HashMap;
                      
                      public class IdentifierAnalyzer extends Analyzer {
                      
                          private WordDelimiterGraphFilterFactory getWordDelimiter() {
                              Map<String, String> settings = new HashMap<>();
                              settings.put("generateWordParts", "1");   // e.g. "PowerShot" => "Power" "Shot"
                              settings.put("generateNumberParts", "1"); // e.g. "500-42" => "500" "42"
                              settings.put("catenateAll", "1");         // e.g. "wi-fi" => "wifi" and "500-42" => "50042"
                              settings.put("preserveOriginal", "1");    // e.g. "500-42" => "500" "42" "500-42"
                              settings.put("splitOnCaseChange", "1");   // e.g. "fooBar" => "foo" "Bar"
                              return new WordDelimiterGraphFilterFactory(settings);
                          }
                      
                          @Override
                          protected TokenStreamComponents createComponents(String fieldName) {
                              Tokenizer tokenizer = new KeywordTokenizer();
                              TokenStream tokenStream = new LowerCaseFilter(tokenizer);
                              tokenStream = getWordDelimiter().create(tokenStream);
                              return new TokenStreamComponents(tokenizer, tokenStream);
                          }
                          
                          @Override
                          protected TokenStream normalize(String fieldName, TokenStream in) {
                              TokenStream tokenStream = new LowerCaseFilter(in);
                              return tokenStream;
                          }
                      
                      }
                      
                      978-3-86680-192-9
                      TS 123
                      123.abc
                      
                      public static void buildIndex() throws IOException, FileNotFoundException, ParseException {
                          final Directory dir = FSDirectory.open(Paths.get(INDEX_PATH));
                          Analyzer analyzer = new IdentifierAnalyzer();
                          IndexWriterConfig iwc = new IndexWriterConfig(analyzer);
                          iwc.setOpenMode(OpenMode.CREATE);
                          Document doc;
                      
                          List<String> identifiers = Arrays.asList("978-3-86680-192-9", "TS 123", "123.abc");
                      
                          try (IndexWriter writer = new IndexWriter(dir, iwc)) {
                              for (String identifier : identifiers) {
                                  doc = new Document();
                                  doc.add(new TextField("identifiers", identifier, Field.Store.YES));
                                  writer.addDocument(doc);
                              }
                          }
                      }
                      
                      public static void doSearch() throws IOException, ParseException {
                          Analyzer analyzer = new IdentifierAnalyzer();
                          QueryParser parser = new QueryParser("identifiers", analyzer);
                      
                          List<String> searches = Arrays.asList("9783", "9783*", "978 3", "978-3", "TS1*", "TS 1*");
                      
                          for (String search : searches) {
                              Query query = parser.parse(search);
                              printHits(query, search);
                          }
                      }
                      
private static void printHits(Query query, String search) throws IOException {
    System.out.println("search term: " + search);
    System.out.println("parsed query: " + query.toString());
    // Use try-with-resources so the reader is closed after each search.
    try (IndexReader reader = DirectoryReader.open(FSDirectory.open(Paths.get(INDEX_PATH)))) {
        IndexSearcher searcher = new IndexSearcher(reader);
        TopDocs results = searcher.search(query, 100);
        ScoreDoc[] hits = results.scoreDocs;
        System.out.println("hits: " + hits.length);
        for (ScoreDoc hit : hits) {
            System.out.println();
            System.out.println("  doc id: " + hit.doc + "; score: " + hit.score);
            Document doc = searcher.doc(hit.doc);
            System.out.println("  identifier: " + doc.get("identifiers"));
        }
        System.out.println("-----------------------------------------");
    }
}
                      
                      9783
                      9783*
                      978 3
                      978-3
                      TS1*
                      TS 1*
                      
                      search term: 9783
                      parsed query: identifiers:9783
                      hits: 0
                      
                      search term: TS 1*
                      parsed query: identifiers:ts identifiers:1*
                      hits: 3
                      
                        doc id: 1; score: 1.590861
                        identifier: TS 123
                      
                        doc id: 0; score: 1.0
                        identifier: 978-3-86680-192-9
                      
                        doc id: 2; score: 1.0
                        identifier: 123.abc
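The output above follows directly from the token set the analyzer produces. With `preserveOriginal`, `generateWordParts`/`generateNumberParts`, and `catenateAll` enabled, "978-3-86680-192-9" is indexed as the lowercased original, its digit runs, and the fully catenated form "9783866801929" - so the bare term "9783" finds nothing (no such token exists), while "9783*" would match the catenated token. Likewise, the classic QueryParser splits "TS 1*" on whitespace before analysis, yielding the OR query `identifiers:ts identifiers:1*`, which over-matches. The following plain-Java sketch (not Lucene code; `tokens` is a hypothetical helper) approximates what those filter settings emit for a single-token input:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Locale;

public class DelimiterDemo {

    // Hypothetical helper approximating the WordDelimiterGraphFilter settings
    // used above (preserveOriginal, generateWordParts/NumberParts, catenateAll).
    // splitOnCaseChange is omitted for brevity; identifiers here are lowercased first.
    static List<String> tokens(String identifier) {
        List<String> out = new ArrayList<>();
        String lower = identifier.toLowerCase(Locale.ROOT);
        out.add(lower);                              // preserveOriginal=1
        String[] parts = lower.split("[^a-z0-9]+");  // generateWordParts/NumberParts=1
        Collections.addAll(out, parts);
        out.add(String.join("", parts));             // catenateAll=1
        return out;
    }

    public static void main(String[] args) {
        // Contains "9783866801929" but not "9783" - which is why the
        // search term "9783" gets 0 hits while "9783*" would match.
        System.out.println(tokens("978-3-86680-192-9"));
    }
}
```

Because "9783" never appears as a whole token, only the wildcard forms ("9783*") or the exact sub-tokens ("978", "3", ...) can hit that document.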
                      
                          ScoreDoc[] hits = results.scoreDocs;
                          System.out.println("hits: " + hits.length);
                          for (ScoreDoc hit : hits) {
                              System.out.println("");
                              System.out.println("  doc id: " + hit.doc + "; score: " + hit.score);
                              Document doc = searcher.doc(hit.doc);
                              System.out.println("  identifier: " + doc.get("identifiers"));
                          }
                          System.out.println("-----------------------------------------");
                      }
                      
                      9783
                      9783*
                      978 3
                      978-3
                      TS1*
                      TS 1*
                      
                      search term: 9783
                      parsed query: identifiers:9783
                      hits: 0
                      
                      search term: TS 1*
                      parsed query: identifiers:ts identifiers:1*
                      hits: 3
                      
                        doc id: 1; score: 1.590861
                        identifier: TS 123
                      
                        doc id: 0; score: 1.0
                        identifier: 978-3-86680-192-9
                      
                        doc id: 2; score: 1.0
                        identifier: 123.abc
                      
                      import org.apache.lucene.analysis.Analyzer;
                      import org.apache.lucene.analysis.Tokenizer;
                      import org.apache.lucene.analysis.TokenStream;
                      import org.apache.lucene.analysis.core.KeywordTokenizer;
                      import org.apache.lucene.analysis.core.LowerCaseFilter;
                      import org.apache.lucene.analysis.miscellaneous.WordDelimiterGraphFilterFactory;
                      import java.util.Map;
                      import java.util.HashMap;
                      
                      public class IdentifierAnalyzer extends Analyzer {
                      
                          private WordDelimiterGraphFilterFactory getWordDelimiter() {
                              Map<String, String> settings = new HashMap<>();
                              settings.put("generateWordParts", "1");   // e.g. "PowerShot" => "Power" "Shot"
                              settings.put("generateNumberParts", "1"); // e.g. "500-42" => "500" "42"
                              settings.put("catenateAll", "1");         // e.g. "wi-fi" => "wifi" and "500-42" => "50042"
                              settings.put("preserveOriginal", "1");    // e.g. "500-42" => "500" "42" "500-42"
                              settings.put("splitOnCaseChange", "1");   // e.g. "fooBar" => "foo" "Bar"
                              return new WordDelimiterGraphFilterFactory(settings);
                          }
                      
                          @Override
                          protected TokenStreamComponents createComponents(String fieldName) {
                              Tokenizer tokenizer = new KeywordTokenizer();
                              TokenStream tokenStream = new LowerCaseFilter(tokenizer);
                              tokenStream = getWordDelimiter().create(tokenStream);
                              return new TokenStreamComponents(tokenizer, tokenStream);
                          }
                          
                          @Override
                          protected TokenStream normalize(String fieldName, TokenStream in) {
                              TokenStream tokenStream = new LowerCaseFilter(in);
                              return tokenStream;
                          }
                      
                      }
                      
                      978-3-86680-192-9
                      TS 123
                      123.abc
                      
public static void buildIndex() throws IOException {
                          final Directory dir = FSDirectory.open(Paths.get(INDEX_PATH));
                          Analyzer analyzer = new IdentifierAnalyzer();
                          IndexWriterConfig iwc = new IndexWriterConfig(analyzer);
                          iwc.setOpenMode(OpenMode.CREATE);
                          Document doc;
                      
                          List<String> identifiers = Arrays.asList("978-3-86680-192-9", "TS 123", "123.abc");
                      
                          try (IndexWriter writer = new IndexWriter(dir, iwc)) {
                              for (String identifier : identifiers) {
                                  doc = new Document();
                                  doc.add(new TextField("identifiers", identifier, Field.Store.YES));
                                  writer.addDocument(doc);
                              }
                          }
                      }
                      
                      public static void doSearch() throws IOException, ParseException {
                          Analyzer analyzer = new IdentifierAnalyzer();
                          QueryParser parser = new QueryParser("identifiers", analyzer);
                      
                          List<String> searches = Arrays.asList("9783", "9783*", "978 3", "978-3", "TS1*", "TS 1*");
                      
                          for (String search : searches) {
                              Query query = parser.parse(search);
                              printHits(query, search);
                          }
                      }
                      
private static void printHits(Query query, String search) throws IOException {
    System.out.println("search term: " + search);
    System.out.println("parsed query: " + query.toString());
    // Open the reader in try-with-resources so it is closed after each search.
    try (IndexReader reader = DirectoryReader.open(FSDirectory.open(Paths.get(INDEX_PATH)))) {
        IndexSearcher searcher = new IndexSearcher(reader);
        TopDocs results = searcher.search(query, 100);
        ScoreDoc[] hits = results.scoreDocs;
        System.out.println("hits: " + hits.length);
        for (ScoreDoc hit : hits) {
            System.out.println();
            System.out.println("  doc id: " + hit.doc + "; score: " + hit.score);
            Document doc = searcher.doc(hit.doc);
            System.out.println("  identifier: " + doc.get("identifiers"));
        }
    }
    System.out.println("-----------------------------------------");
}
                      
                      9783
                      9783*
                      978 3
                      978-3
                      TS1*
                      TS 1*
                      
                      search term: 9783
                      parsed query: identifiers:9783
                      hits: 0
                      
                      search term: TS 1*
                      parsed query: identifiers:ts identifiers:1*
                      hits: 3
                      
                        doc id: 1; score: 1.590861
                        identifier: TS 123
                      
                        doc id: 0; score: 1.0
                        identifier: 978-3-86680-192-9
                      
                        doc id: 2; score: 1.0
                        identifier: 123.abc
                      
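The zero-hit result for `9783` follows from which tokens the analyzer actually emits: the word-delimiter filter indexes the original term, each delimited part, and the fully catenated form, but never a partial concatenation such as `9783`. The following plain-Java sketch (a simplified stand-in, not the real `WordDelimiterGraphFilter`) reproduces that token set for `978-3-86680-192-9`:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Set;

public class TokenSketch {
    // Simplified stand-in for LowerCaseFilter + WordDelimiterGraphFilter with
    // generateWordParts/generateNumberParts/catenateAll/preserveOriginal set:
    // emit the lowercased original, each delimited part, and all parts joined.
    static Set<String> tokens(String identifier) {
        String lower = identifier.toLowerCase();                 // LowerCaseFilter runs first
        List<String> parts = new ArrayList<>(Arrays.asList(lower.split("[^a-z0-9]+")));
        Set<String> out = new LinkedHashSet<>();
        out.add(lower);                                          // preserveOriginal
        out.addAll(parts);                                       // word/number parts
        out.add(String.join("", parts));                         // catenateAll
        return out;
    }

    public static void main(String[] args) {
        Set<String> t = tokens("978-3-86680-192-9");
        // "9783" is not among the emitted tokens, so the term query
        // identifiers:9783 matches nothing, while the prefix query 9783*
        // matches the catenated token "9783866801929".
        System.out.println(t.contains("9783"));          // false
        System.out.println(t.contains("9783866801929")); // true
    }
}
```

This also shows why `9783*` succeeds where `9783` fails: a prefix query runs against the indexed terms directly, and the catenated term starts with `9783`.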

                      How to re-index documents with integer id?


                      open Lucene.Net.Util
                      
                      let id = doc.GetField("id").GetInt32Value().Value
                      let bytes = BytesRef(NumericUtils.BUF_SIZE_INT32)
                      NumericUtils.Int32ToPrefixCodedBytes(id, 0, bytes)
                      let term = Term("id", bytes)
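The term built above is what `IndexWriter.UpdateDocument(term, doc)` keys on. In both Java Lucene and Lucene.Net, `updateDocument` is an atomic delete-then-add: every document matching the term is deleted, then the replacement is added. A plain-Java sketch of that semantics over an in-memory map (a hypothetical `ReindexSketch` class for illustration, not the Lucene API):

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class ReindexSketch {
    // Hypothetical in-memory "index" keyed by the integer id term,
    // illustrating updateDocument semantics only.
    static final Map<Integer, String> index = new LinkedHashMap<>();

    // Mirrors IndexWriter.updateDocument(term, doc):
    // delete any document matching the key, then add the new one.
    static void updateDocument(int id, String doc) {
        index.remove(id);   // delete-by-term
        index.put(id, doc); // add the replacement
    }

    public static void main(String[] args) {
        updateDocument(42, "first version");
        updateDocument(42, "second version");
        // Re-indexing under the same id leaves exactly one document.
        System.out.println(index.size());  // 1
        System.out.println(index.get(42)); // second version
    }
}
```

The prefix-coded bytes matter because numeric fields are not indexed as their decimal string; the term must use the same encoding the field was indexed with, or the delete half of the update silently matches nothing.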
                      

                      Sort Index using DocValues for integers?

                      Sort sort = new Sort(new SortedNumericSortField("number", SortField.Type.LONG, true));
                      TopDocs docs = searcher.search(new MatchAllDocsQuery(), 100, sort);
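The third constructor argument (`reverse = true`) makes the sort descending on the doc value. A self-contained sketch of that comparator behavior in plain Java (not the Lucene sort machinery):

```java
import java.util.Arrays;
import java.util.Comparator;

public class ReverseSortSketch {
    // reverse=true in SortedNumericSortField corresponds to a
    // descending comparison on the field's long doc value.
    static Long[] sortDescending(Long[] values) {
        Long[] copy = values.clone();
        Arrays.sort(copy, Comparator.reverseOrder());
        return copy;
    }

    public static void main(String[] args) {
        // Doc values for the "number" field of four documents.
        Long[] numbers = { 1000L, 1001L, 990L, 300L };
        System.out.println(Arrays.toString(sortDescending(numbers))); // [1001, 1000, 990, 300]
    }
}
```

Pass `false` (or omit the flag) for ascending order.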
                      
                        
                      /*
                       * Licensed to the Apache Software Foundation (ASF) under one or more
                       * contributor license agreements.  See the NOTICE file distributed with
                       * this work for additional information regarding copyright ownership.
                       * The ASF licenses this file to You under the Apache License, Version 2.0
                       * (the "License"); you may not use this file except in compliance with
                       * the License.  You may obtain a copy of the License at
                       *
                       *     http://www.apache.org/licenses/LICENSE-2.0
                       *
                       * Unless required by applicable law or agreed to in writing, software
                       * distributed under the License is distributed on an "AS IS" BASIS,
                       * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
                       * See the License for the specific language governing permissions and
                       * limitations under the License.
                       */
                      
                       /** Tests sorting on type int */
                        public void testInt() throws IOException {
                          Directory dir = newDirectory();
                          RandomIndexWriter writer = new RandomIndexWriter(random(), dir);
                      
                          Document doc = new Document();
                          doc.add(new NumericDocValuesField("value", 300000));
                          doc.add(newStringField("value", "300000", Field.Store.YES));
                          writer.addDocument(doc);
                          
                          doc = new Document();
                          doc.add(new NumericDocValuesField("value", -1));
                          doc.add(newStringField("value", "-1", Field.Store.YES));
                          writer.addDocument(doc);
                      
                          doc = new Document();
                          doc.add(new NumericDocValuesField("value", 4));
                          doc.add(newStringField("value", "4", Field.Store.YES));
                          writer.addDocument(doc);
                          
                          IndexReader ir = writer.getReader();
                          writer.close();
                          
                          IndexSearcher searcher = newSearcher(ir);
                          Sort sort = new Sort(new SortField("value", SortField.Type.INT));
                      
                          TopDocs td = searcher.search(new MatchAllDocsQuery(), 10, sort);
                          assertEquals(3, td.totalHits.value);
                          // numeric order
                          assertEquals("-1", searcher.doc(td.scoreDocs[0].doc).get("value"));
                          assertEquals("4", searcher.doc(td.scoreDocs[1].doc).get("value"));
                          assertEquals("300000", searcher.doc(td.scoreDocs[2].doc).get("value"));
                      
                          ir.close();
                          dir.close();
                        }
                      
                      public void NumericDocValueSort() {
                      
                                  Analyzer standardAnalyzer = new StandardAnalyzer(LuceneVersion.LUCENE_48);
                                  Directory indexDir = new RAMDirectory();
                                  IndexWriterConfig iwc = new IndexWriterConfig(LuceneVersion.LUCENE_48, standardAnalyzer);
                      
                                  IndexWriter indexWriter = new IndexWriter(indexDir, iwc);
                      
                                  Document doc = new Document();
                      
                                  doc.Add(new TextField("name", "A1", Field.Store.YES));
            //doc.Add(new StoredField("number", 1000L)); // optionally also store the value so it can be retrieved from the doc later; can be done for every doc
                                  doc.Add(new NumericDocValuesField("number", 1000L));
                                  indexWriter.AddDocument(doc);
                      
                                  doc.Fields.Clear();
                                  doc.Add(new TextField("name", "A2", Field.Store.YES));
                                  doc.Add(new NumericDocValuesField("number", 1001L));
                                  indexWriter.AddDocument(doc);
                      
                                  doc.Fields.Clear();
                                  doc.Add(new TextField("name", "A3", Field.Store.YES));
                                  doc.Add(new NumericDocValuesField("number", 990L));
                                  indexWriter.AddDocument(doc);
                      
                                  doc.Fields.Clear();
                                  doc.Add(new TextField("name", "A4", Field.Store.YES));
                                  doc.Add(new NumericDocValuesField("number", 300L));
                                  indexWriter.AddDocument(doc);
                      
                                  indexWriter.Commit();
                      
                                  IndexReader reader = indexWriter.GetReader(applyAllDeletes: true);
                                  IndexSearcher searcher = new IndexSearcher(reader);
                      
                                  Sort sort;
                                  TopDocs docs;
                                  SortField sortField = new SortField("number", SortFieldType.INT64);
                                  sort = new Sort(sortField);
                      
                                  docs = searcher.Search(new MatchAllDocsQuery(), 1000, sort);
                                  
                      
                                  foreach (ScoreDoc scoreDoc in docs.ScoreDocs) {
                                      Document curDoc = searcher.Doc(scoreDoc.Doc);
                                      string name = curDoc.Get("name");
                                  }
                      
                                  reader.Dispose();               //reader.close() in java
                              }
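Since the `SortField` above is built without a reverse flag, the search returns the documents in ascending order of their `number` doc value. A small plain-Java sketch of the order the `ScoreDocs` loop visits, using the names and values indexed in the example (not the Lucene.Net API itself):

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class DocValueSortSketch {
    // Sort document names ascending by their numeric doc value,
    // mirroring Sort(new SortField("number", SortFieldType.INT64)).
    static List<String> ascendingByNumber(Map<String, Long> docs) {
        List<String> order = new ArrayList<>(docs.keySet());
        order.sort(Comparator.comparing(docs::get));
        return order;
    }

    public static void main(String[] args) {
        // name -> "number" doc value, as indexed in the example above.
        Map<String, Long> docs = new LinkedHashMap<>();
        docs.put("A1", 1000L);
        docs.put("A2", 1001L);
        docs.put("A3", 990L);
        docs.put("A4", 300L);
        System.out.println(ascendingByNumber(docs)); // [A4, A3, A1, A2]
    }
}
```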
                      
                        
                      

                      How to use HIGH_COMPRESSION in Lucene.Net 4.8

                      /*
                           * Licensed to the Apache Software Foundation (ASF) under one or more
                           * contributor license agreements.  See the NOTICE file distributed with
                           * this work for additional information regarding copyright ownership.
                           * The ASF licenses this file to You under the Apache License, Version 2.0
                           * (the "License"); you may not use this file except in compliance with
                           * the License.  You may obtain a copy of the License at
                           *
                           *     http://www.apache.org/licenses/LICENSE-2.0
                           *
                           * Unless required by applicable law or agreed to in writing, software
                           * distributed under the License is distributed on an "AS IS" BASIS,
                           * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
                           * See the License for the specific language governing permissions and
                           * limitations under the License.
                           */
                      
                      public sealed class Lucene41StoredFieldsHighCompressionFormat : CompressingStoredFieldsFormat {
                              /// <summary>
                              /// Sole constructor. </summary>
                              public Lucene41StoredFieldsHighCompressionFormat()
                                  : base("Lucene41StoredFieldsHighCompression", CompressionMode.HIGH_COMPRESSION, 1 << 14) {
                              }
                          }
                      
                      /*
                           * Licensed to the Apache Software Foundation (ASF) under one or more
                           * contributor license agreements.  See the NOTICE file distributed with
                           * this work for additional information regarding copyright ownership.
                           * The ASF licenses this file to You under the Apache License, Version 2.0
                           * (the "License"); you may not use this file except in compliance with
                           * the License.  You may obtain a copy of the License at
                           *
                           *     http://www.apache.org/licenses/LICENSE-2.0
                           *
                           * Unless required by applicable law or agreed to in writing, software
                           * distributed under the License is distributed on an "AS IS" BASIS,
                           * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
                           * See the License for the specific language governing permissions and
                           * limitations under the License.
                           */
                      
                          using Lucene40LiveDocsFormat = Lucene.Net.Codecs.Lucene40.Lucene40LiveDocsFormat;
                          using Lucene41StoredFieldsFormat = Lucene.Net.Codecs.Lucene41.Lucene41StoredFieldsFormat;
                          using Lucene42NormsFormat = Lucene.Net.Codecs.Lucene42.Lucene42NormsFormat;
                          using Lucene42TermVectorsFormat = Lucene.Net.Codecs.Lucene42.Lucene42TermVectorsFormat;
                          using PerFieldDocValuesFormat = Lucene.Net.Codecs.PerField.PerFieldDocValuesFormat;
                          using PerFieldPostingsFormat = Lucene.Net.Codecs.PerField.PerFieldPostingsFormat;
                      
                          /// <summary>
                          /// Implements the Lucene 4.6 index format, with configurable per-field postings
                          /// and docvalues formats.
                          /// <para/>
                          /// If you want to reuse functionality of this codec in another codec, extend
                          /// <see cref="FilterCodec"/>.
                          /// <para/>
                          /// See <see cref="Lucene.Net.Codecs.Lucene46"/> package documentation for file format details.
                          /// <para/>
                          /// @lucene.experimental 
                          /// </summary>
                          // NOTE: if we make largish changes in a minor release, easier to just make Lucene46Codec or whatever
                          // if they are backwards compatible or smallish we can probably do the backwards in the postingsreader
                          // (it writes a minor version, etc).
                          [CodecName("Lucene46HighCompression")]
                          public class Lucene46HighCompressionCodec : Codec {
                              private readonly StoredFieldsFormat fieldsFormat = new Lucene41StoredFieldsHighCompressionFormat();    //<--This is the only line different then the stock Lucene46Codec
                              private readonly TermVectorsFormat vectorsFormat = new Lucene42TermVectorsFormat();
                              private readonly FieldInfosFormat fieldInfosFormat = new Lucene46FieldInfosFormat();
                              private readonly SegmentInfoFormat segmentInfosFormat = new Lucene46SegmentInfoFormat();
                              private readonly LiveDocsFormat liveDocsFormat = new Lucene40LiveDocsFormat();
                      
        private readonly PostingsFormat postingsFormat;

        // Backing field for GetPostingsFormatForField (lazily initialized there).
        private PostingsFormat defaultFormat;
                      
                              private class PerFieldPostingsFormatAnonymousInnerClassHelper : PerFieldPostingsFormat {
                                  private readonly Lucene46HighCompressionCodec outerInstance;
                      
                                  public PerFieldPostingsFormatAnonymousInnerClassHelper(Lucene46HighCompressionCodec outerInstance) {
                                      this.outerInstance = outerInstance;
                                  }
                      
                                  [MethodImpl(MethodImplOptions.AggressiveInlining)]
                                  public override PostingsFormat GetPostingsFormatForField(string field) {
                                      return outerInstance.GetPostingsFormatForField(field);
                                  }
                              }
                      
                              private readonly DocValuesFormat docValuesFormat;
                      
                              private class PerFieldDocValuesFormatAnonymousInnerClassHelper : PerFieldDocValuesFormat {
                                  private readonly Lucene46HighCompressionCodec outerInstance;
                      
                                  public PerFieldDocValuesFormatAnonymousInnerClassHelper(Lucene46HighCompressionCodec outerInstance) {
                                      this.outerInstance = outerInstance;
                                  }
                      
                                  [MethodImpl(MethodImplOptions.AggressiveInlining)]
                                  public override DocValuesFormat GetDocValuesFormatForField(string field) {
                                      return outerInstance.GetDocValuesFormatForField(field);
                                  }
                              }
                      
                              /// <summary>
                              /// Sole constructor. </summary>
                              public Lucene46HighCompressionCodec()
                                  : base() {
                                  postingsFormat = new PerFieldPostingsFormatAnonymousInnerClassHelper(this);
                                  docValuesFormat = new PerFieldDocValuesFormatAnonymousInnerClassHelper(this);
                              }
                      
                              public override sealed StoredFieldsFormat StoredFieldsFormat => fieldsFormat;
                      
                              public override sealed TermVectorsFormat TermVectorsFormat => vectorsFormat;
                      
                              public override sealed PostingsFormat PostingsFormat => postingsFormat;
                      
                              public override sealed FieldInfosFormat FieldInfosFormat => fieldInfosFormat;
                      
                              public override sealed SegmentInfoFormat SegmentInfoFormat => segmentInfosFormat;
                      
                              public override sealed LiveDocsFormat LiveDocsFormat => liveDocsFormat;
                      
                              /// <summary>
                              /// Returns the postings format that should be used for writing
                              /// new segments of <paramref name="field"/>.
                              /// <para/>
                              /// The default implementation always returns "Lucene41"
                              /// </summary>
                              [MethodImpl(MethodImplOptions.AggressiveInlining)]
                              public virtual PostingsFormat GetPostingsFormatForField(string field) {
                                  // LUCENENET specific - lazy initialize the codec to ensure we get the correct type if overridden.
                                  if (defaultFormat == null) {
                                      defaultFormat = Lucene.Net.Codecs.PostingsFormat.ForName("Lucene41");
                                  }
                                  return defaultFormat;
                              }
                      
                              /// <summary>
                              /// Returns the docvalues format that should be used for writing
                              /// new segments of <paramref name="field"/>.
                              /// <para/>
                              /// The default implementation always returns "Lucene45"
                              /// </summary>
                              [MethodImpl(MethodImplOptions.AggressiveInlining)]
                              public virtual DocValuesFormat GetDocValuesFormatForField(string field) {
                                  // LUCENENET specific - lazy initialize the codec to ensure we get the correct type if overridden.
                                  if (defaultDVFormat == null) {
                                      defaultDVFormat = Lucene.Net.Codecs.DocValuesFormat.ForName("Lucene45");
                                  }
                                  return defaultDVFormat;
                              }
                      
                              public override sealed DocValuesFormat DocValuesFormat => docValuesFormat;
                      
                              // LUCENENET specific - lazy initialize the codecs to ensure we get the correct type if overridden.
                              private PostingsFormat defaultFormat;
                              private DocValuesFormat defaultDVFormat;
                      
                              private readonly NormsFormat normsFormat = new Lucene42NormsFormat();
                      
                              public override sealed NormsFormat NormsFormat => normsFormat;
                          }
                      
// Register the custom codec with Lucene.Net before any IndexWriter or IndexReader is created
// (typically once at application startup).
Codec.SetCodecFactory(new DefaultCodecFactory {
    CustomCodecTypes = new Type[] { typeof(Lucene46HighCompressionCodec) }
});
                      
                      public class TestCompression {
                      
                      
                              [Fact]
                              public void HighCompression() {
                    FxTest.Setup();     // Project-specific test bootstrap (performs one-time setup such as the codec factory registration shown above).
                      
                                  Directory indexDir = new RAMDirectory();
                      
                                  Analyzer standardAnalyzer = new StandardAnalyzer(LuceneVersion.LUCENE_48);
                      
                                  IndexWriterConfig indexConfig = new IndexWriterConfig(LuceneVersion.LUCENE_48, standardAnalyzer);
                                  indexConfig.Codec = new Lucene46HighCompressionCodec();     //<--------Install the High Compression codec.
                      
                                  indexConfig.UseCompoundFile = true;
                      
                                  IndexWriter writer = new IndexWriter(indexDir, indexConfig);
                      
                    // Source: https://github.com/apache/lucenenet/blob/Lucene.Net_4_8_0_beta00006/src/Lucene.Net/Search/SearcherFactory.cs
                    // SearchWarmer is a project-specific SearcherFactory implementation used to warm newly opened searchers.
                    SearcherManager searcherManager = new SearcherManager(writer, applyAllDeletes: true, new SearchWarmer());
                      
                                  Document doc = new Document();
                                  doc.Add(new StringField("examplePrimaryKey", "001", Field.Store.YES));
                                  doc.Add(new TextField("exampleField", "Unique gifts are great gifts.", Field.Store.YES));
                                  writer.AddDocument(doc);
                      
                                  doc = new Document();
                                  doc.Add(new StringField("examplePrimaryKey", "002", Field.Store.YES));
                                  doc.Add(new TextField("exampleField", "Everyone is gifted.", Field.Store.YES));
                                  writer.AddDocument(doc);
                      
                                  doc = new Document();
                                  doc.Add(new StringField("examplePrimaryKey", "003", Field.Store.YES));
                                  doc.Add(new TextField("exampleField", "Gifts are meant to be shared.", Field.Store.YES));
                                  writer.AddDocument(doc);
                      
                                  writer.Commit();
                      
                                  searcherManager.MaybeRefreshBlocking();
                                  IndexSearcher indexSearcher = searcherManager.Acquire();
                                  try {
                                      QueryParser parser = new QueryParser(LuceneVersion.LUCENE_48, "exampleField", standardAnalyzer);
                                      Query query = parser.Parse("everyone");
                      
                        // Cap the requested hit count; int.MaxValue would pre-allocate a correspondingly huge priority queue.
                        TopDocs topDocs = indexSearcher.Search(query, 100);
                      
                                      int numMatchingDocs = topDocs.ScoreDocs.Length;
                                      Assert.Equal(1, numMatchingDocs);
                      
                      
                                      Document docRead = indexSearcher.Doc(topDocs.ScoreDocs[0].Doc);
                                      string primaryKey = docRead.Get("examplePrimaryKey");
                                      Assert.Equal("002", primaryKey);
                      
                    } finally {
                        searcherManager.Release(indexSearcher);
                    }

                    // Clean up: dispose the manager before the writer that backs it.
                    searcherManager.Dispose();
                    writer.Dispose();
                }
                      
                          }
                      
                      /*
                           * Licensed to the Apache Software Foundation (ASF) under one or more
                           * contributor license agreements.  See the NOTICE file distributed with
                           * this work for additional information regarding copyright ownership.
                           * The ASF licenses this file to You under the Apache License, Version 2.0
                           * (the "License"); you may not use this file except in compliance with
                           * the License.  You may obtain a copy of the License at
                           *
                           *     http://www.apache.org/licenses/LICENSE-2.0
                           *
                           * Unless required by applicable law or agreed to in writing, software
                           * distributed under the License is distributed on an "AS IS" BASIS,
                           * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
                           * See the License for the specific language governing permissions and
                           * limitations under the License.
                           */
                      
                      public sealed class Lucene41StoredFieldsHighCompressionFormat : CompressingStoredFieldsFormat {
                              /// <summary>
                              /// Sole constructor. </summary>
                              public Lucene41StoredFieldsHighCompressionFormat()
                    : base("Lucene41StoredFieldsHighCompression", CompressionMode.HIGH_COMPRESSION, 1 << 14) {   // HIGH_COMPRESSION (DEFLATE) with 16 KB chunks
                              }
                          }
                      
                      
                      
                              private readonly NormsFormat normsFormat = new Lucene42NormsFormat();
                      
                              public override sealed NormsFormat NormsFormat => normsFormat;
                          }
                      
                      Codec.SetCodecFactory(new DefaultCodecFactory {
                          CustomCodecTypes = new Type[] { typeof(Lucene46HighCompressionCodec) }
                      });
                      
                      public class TestCompression {
                      
                      
                              [Fact]
                              public void HighCompression() {
                                  FxTest.Setup();
                      
                                  Directory indexDir = new RAMDirectory();
                      
                                  Analyzer standardAnalyzer = new StandardAnalyzer(LuceneVersion.LUCENE_48);
                      
                                  IndexWriterConfig indexConfig = new IndexWriterConfig(LuceneVersion.LUCENE_48, standardAnalyzer);
                                  indexConfig.Codec = new Lucene46HighCompressionCodec();     //<--------Install the High Compression codec.
                      
                                  indexConfig.UseCompoundFile = true;
                      
                                  IndexWriter writer = new IndexWriter(indexDir, indexConfig);
                      
                                  //souce: https://github.com/apache/lucenenet/blob/Lucene.Net_4_8_0_beta00006/src/Lucene.Net/Search/SearcherFactory.cs
                                  SearcherManager searcherManager = new SearcherManager(writer, applyAllDeletes: true, new SearchWarmer());
                      
                                  Document doc = new Document();
                                  doc.Add(new StringField("examplePrimaryKey", "001", Field.Store.YES));
                                  doc.Add(new TextField("exampleField", "Unique gifts are great gifts.", Field.Store.YES));
                                  writer.AddDocument(doc);
                      
                                  doc = new Document();
                                  doc.Add(new StringField("examplePrimaryKey", "002", Field.Store.YES));
                                  doc.Add(new TextField("exampleField", "Everyone is gifted.", Field.Store.YES));
                                  writer.AddDocument(doc);
                      
                                  doc = new Document();
                                  doc.Add(new StringField("examplePrimaryKey", "003", Field.Store.YES));
                                  doc.Add(new TextField("exampleField", "Gifts are meant to be shared.", Field.Store.YES));
                                  writer.AddDocument(doc);
                      
                                  writer.Commit();
                      
                                  searcherManager.MaybeRefreshBlocking();
                                  IndexSearcher indexSearcher = searcherManager.Acquire();
                                  try {
                                      QueryParser parser = new QueryParser(LuceneVersion.LUCENE_48, "exampleField", standardAnalyzer);
                                      Query query = parser.Parse("everyone");
                      
                                      TopDocs topDocs = indexSearcher.Search(query, int.MaxValue);
                      
                                      int numMatchingDocs = topDocs.ScoreDocs.Length;
                                      Assert.Equal(1, numMatchingDocs);
                      
                      
                                      Document docRead = indexSearcher.Doc(topDocs.ScoreDocs[0].Doc);
                                      string primaryKey = docRead.Get("examplePrimaryKey");
                                      Assert.Equal("002", primaryKey);
                      
                                  } finally {
                                      searcherManager.Release(indexSearcher);
                                  }
                      
                              }
                      
                          }
                      
                      /*
                           * Licensed to the Apache Software Foundation (ASF) under one or more
                           * contributor license agreements.  See the NOTICE file distributed with
                           * this work for additional information regarding copyright ownership.
                           * The ASF licenses this file to You under the Apache License, Version 2.0
                           * (the "License"); you may not use this file except in compliance with
                           * the License.  You may obtain a copy of the License at
                           *
                           *     http://www.apache.org/licenses/LICENSE-2.0
                           *
                           * Unless required by applicable law or agreed to in writing, software
                           * distributed under the License is distributed on an "AS IS" BASIS,
                           * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
                           * See the License for the specific language governing permissions and
                           * limitations under the License.
                           */
                      
                      public sealed class Lucene41StoredFieldsHighCompressionFormat : CompressingStoredFieldsFormat {
                              /// <summary>
                              /// Sole constructor. </summary>
                              public Lucene41StoredFieldsHighCompressionFormat()
                                  : base("Lucene41StoredFieldsHighCompression", CompressionMode.HIGH_COMPRESSION, 1 << 14) {
                              }
                          }
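In Lucene 4.x, CompressionMode.HIGH_COMPRESSION trades indexing speed for a better compression ratio (it is backed by DEFLATE rather than the LZ4-based default), and the `1 << 14` argument sets the 16 KB stored-fields chunk size. The size/speed trade-off of DEFLATE levels can be seen with the JDK's own Deflater (a standalone sketch, not part of the codec above):

```java
import java.util.zip.Deflater;

public class DeflateLevels {
    // Returns the compressed size of the input at the given deflate level.
    static int compressedSize(byte[] input, int level) {
        Deflater deflater = new Deflater(level);
        deflater.setInput(input);
        deflater.finish();
        byte[] buf = new byte[8192];
        int total = 0;
        while (!deflater.finished()) {
            total += deflater.deflate(buf);  // count output bytes; contents are discarded
        }
        deflater.end();
        return total;
    }

    public static void main(String[] args) {
        // Repetitive text, like many stored fields, compresses very well.
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < 500; i++) sb.append("Unique gifts are great gifts. ");
        byte[] data = sb.toString().getBytes();

        int fast = compressedSize(data, Deflater.BEST_SPEED);        // level 1
        int high = compressedSize(data, Deflater.BEST_COMPRESSION);  // level 9
        System.out.println("raw=" + data.length + " fast=" + fast + " high=" + high);
    }
}
```

Higher levels spend more CPU per chunk at write time; reads pay roughly the same decompression cost, which is why the high-compression format is attractive for indexes with large stored fields.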
                      
                      /*
                           * Licensed to the Apache Software Foundation (ASF) under one or more
                           * contributor license agreements.  See the NOTICE file distributed with
                           * this work for additional information regarding copyright ownership.
                           * The ASF licenses this file to You under the Apache License, Version 2.0
                           * (the "License"); you may not use this file except in compliance with
                           * the License.  You may obtain a copy of the License at
                           *
                           *     http://www.apache.org/licenses/LICENSE-2.0
                           *
                           * Unless required by applicable law or agreed to in writing, software
                           * distributed under the License is distributed on an "AS IS" BASIS,
                           * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
                           * See the License for the specific language governing permissions and
                           * limitations under the License.
                           */
                      
                          using Lucene40LiveDocsFormat = Lucene.Net.Codecs.Lucene40.Lucene40LiveDocsFormat;
                          using Lucene41StoredFieldsFormat = Lucene.Net.Codecs.Lucene41.Lucene41StoredFieldsFormat;
                          using Lucene42NormsFormat = Lucene.Net.Codecs.Lucene42.Lucene42NormsFormat;
                          using Lucene42TermVectorsFormat = Lucene.Net.Codecs.Lucene42.Lucene42TermVectorsFormat;
                          using PerFieldDocValuesFormat = Lucene.Net.Codecs.PerField.PerFieldDocValuesFormat;
                          using PerFieldPostingsFormat = Lucene.Net.Codecs.PerField.PerFieldPostingsFormat;
                      
                          /// <summary>
                          /// Implements the Lucene 4.6 index format, with configurable per-field postings
                          /// and docvalues formats.
                          /// <para/>
                          /// If you want to reuse functionality of this codec in another codec, extend
                          /// <see cref="FilterCodec"/>.
                          /// <para/>
                          /// See <see cref="Lucene.Net.Codecs.Lucene46"/> package documentation for file format details.
                          /// <para/>
                          /// @lucene.experimental 
                          /// </summary>
                          // NOTE: if we make largish changes in a minor release, easier to just make Lucene46Codec or whatever
                          // if they are backwards compatible or smallish we can probably do the backwards in the postingsreader
                          // (it writes a minor version, etc).
                          [CodecName("Lucene46HighCompression")]
                          public class Lucene46HighCompressionCodec : Codec {
        private readonly StoredFieldsFormat fieldsFormat = new Lucene41StoredFieldsHighCompressionFormat();    //<-- This is the only line that differs from the stock Lucene46Codec.
                              private readonly TermVectorsFormat vectorsFormat = new Lucene42TermVectorsFormat();
                              private readonly FieldInfosFormat fieldInfosFormat = new Lucene46FieldInfosFormat();
                              private readonly SegmentInfoFormat segmentInfosFormat = new Lucene46SegmentInfoFormat();
                              private readonly LiveDocsFormat liveDocsFormat = new Lucene40LiveDocsFormat();
                      
                              private readonly PostingsFormat postingsFormat;
                      
                              private class PerFieldPostingsFormatAnonymousInnerClassHelper : PerFieldPostingsFormat {
                                  private readonly Lucene46HighCompressionCodec outerInstance;
                      
                                  public PerFieldPostingsFormatAnonymousInnerClassHelper(Lucene46HighCompressionCodec outerInstance) {
                                      this.outerInstance = outerInstance;
                                  }
                      
                                  [MethodImpl(MethodImplOptions.AggressiveInlining)]
                                  public override PostingsFormat GetPostingsFormatForField(string field) {
                                      return outerInstance.GetPostingsFormatForField(field);
                                  }
                              }
                      
                              private readonly DocValuesFormat docValuesFormat;
                      
                              private class PerFieldDocValuesFormatAnonymousInnerClassHelper : PerFieldDocValuesFormat {
                                  private readonly Lucene46HighCompressionCodec outerInstance;
                      
                                  public PerFieldDocValuesFormatAnonymousInnerClassHelper(Lucene46HighCompressionCodec outerInstance) {
                                      this.outerInstance = outerInstance;
                                  }
                      
                                  [MethodImpl(MethodImplOptions.AggressiveInlining)]
                                  public override DocValuesFormat GetDocValuesFormatForField(string field) {
                                      return outerInstance.GetDocValuesFormatForField(field);
                                  }
                              }
                      
                              /// <summary>
                              /// Sole constructor. </summary>
                              public Lucene46HighCompressionCodec()
                                  : base() {
                                  postingsFormat = new PerFieldPostingsFormatAnonymousInnerClassHelper(this);
                                  docValuesFormat = new PerFieldDocValuesFormatAnonymousInnerClassHelper(this);
                              }
                      
                              public override sealed StoredFieldsFormat StoredFieldsFormat => fieldsFormat;
                      
                              public override sealed TermVectorsFormat TermVectorsFormat => vectorsFormat;
                      
                              public override sealed PostingsFormat PostingsFormat => postingsFormat;
                      
                              public override sealed FieldInfosFormat FieldInfosFormat => fieldInfosFormat;
                      
                              public override sealed SegmentInfoFormat SegmentInfoFormat => segmentInfosFormat;
                      
                              public override sealed LiveDocsFormat LiveDocsFormat => liveDocsFormat;
                      
                              /// <summary>
                              /// Returns the postings format that should be used for writing
                              /// new segments of <paramref name="field"/>.
                              /// <para/>
                              /// The default implementation always returns "Lucene41"
                              /// </summary>
                              [MethodImpl(MethodImplOptions.AggressiveInlining)]
                              public virtual PostingsFormat GetPostingsFormatForField(string field) {
                                  // LUCENENET specific - lazy initialize the codec to ensure we get the correct type if overridden.
                                  if (defaultFormat == null) {
                                      defaultFormat = Lucene.Net.Codecs.PostingsFormat.ForName("Lucene41");
                                  }
                                  return defaultFormat;
                              }
                      
                              /// <summary>
                              /// Returns the docvalues format that should be used for writing
                              /// new segments of <paramref name="field"/>.
                              /// <para/>
                              /// The default implementation always returns "Lucene45"
                              /// </summary>
                              [MethodImpl(MethodImplOptions.AggressiveInlining)]
                              public virtual DocValuesFormat GetDocValuesFormatForField(string field) {
                                  // LUCENENET specific - lazy initialize the codec to ensure we get the correct type if overridden.
                                  if (defaultDVFormat == null) {
                                      defaultDVFormat = Lucene.Net.Codecs.DocValuesFormat.ForName("Lucene45");
                                  }
                                  return defaultDVFormat;
                              }
                      
                              public override sealed DocValuesFormat DocValuesFormat => docValuesFormat;
                      
                              // LUCENENET specific - lazy initialize the codecs to ensure we get the correct type if overridden.
                              private PostingsFormat defaultFormat;
                              private DocValuesFormat defaultDVFormat;
                      
                              private readonly NormsFormat normsFormat = new Lucene42NormsFormat();
                      
                              public override sealed NormsFormat NormsFormat => normsFormat;
                          }
                      
                      Codec.SetCodecFactory(new DefaultCodecFactory {
                          CustomCodecTypes = new Type[] { typeof(Lucene46HighCompressionCodec) }
                      });
                      
                      public class TestCompression {
                      
                      
                              [Fact]
                              public void HighCompression() {
                                  FxTest.Setup();
                      
                                  Directory indexDir = new RAMDirectory();
                      
                                  Analyzer standardAnalyzer = new StandardAnalyzer(LuceneVersion.LUCENE_48);
                      
                                  IndexWriterConfig indexConfig = new IndexWriterConfig(LuceneVersion.LUCENE_48, standardAnalyzer);
                                  indexConfig.Codec = new Lucene46HighCompressionCodec();     //<--------Install the High Compression codec.
                      
                                  indexConfig.UseCompoundFile = true;
                      
                                  IndexWriter writer = new IndexWriter(indexDir, indexConfig);
                      
            // source: https://github.com/apache/lucenenet/blob/Lucene.Net_4_8_0_beta00006/src/Lucene.Net/Search/SearcherFactory.cs
                                  SearcherManager searcherManager = new SearcherManager(writer, applyAllDeletes: true, new SearchWarmer());
                      
                                  Document doc = new Document();
                                  doc.Add(new StringField("examplePrimaryKey", "001", Field.Store.YES));
                                  doc.Add(new TextField("exampleField", "Unique gifts are great gifts.", Field.Store.YES));
                                  writer.AddDocument(doc);
                      
                                  doc = new Document();
                                  doc.Add(new StringField("examplePrimaryKey", "002", Field.Store.YES));
                                  doc.Add(new TextField("exampleField", "Everyone is gifted.", Field.Store.YES));
                                  writer.AddDocument(doc);
                      
                                  doc = new Document();
                                  doc.Add(new StringField("examplePrimaryKey", "003", Field.Store.YES));
                                  doc.Add(new TextField("exampleField", "Gifts are meant to be shared.", Field.Store.YES));
                                  writer.AddDocument(doc);
                      
                                  writer.Commit();
                      
                                  searcherManager.MaybeRefreshBlocking();
                                  IndexSearcher indexSearcher = searcherManager.Acquire();
                                  try {
                                      QueryParser parser = new QueryParser(LuceneVersion.LUCENE_48, "exampleField", standardAnalyzer);
                                      Query query = parser.Parse("everyone");
                      
                                      TopDocs topDocs = indexSearcher.Search(query, int.MaxValue);
                      
                                      int numMatchingDocs = topDocs.ScoreDocs.Length;
                                      Assert.Equal(1, numMatchingDocs);
                      
                      
                                      Document docRead = indexSearcher.Doc(topDocs.ScoreDocs[0].Doc);
                                      string primaryKey = docRead.Get("examplePrimaryKey");
                                      Assert.Equal("002", primaryKey);
                      
                                  } finally {
                                      searcherManager.Release(indexSearcher);
                                  }
                      
                              }
                      
                          }
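The C# test above follows the same near-real-time pattern this (Java) repository's title refers to: open a SearcherManager over the IndexWriter, refresh after writes, and acquire/release searchers per request. A minimal Java sketch against the Lucene 4.x API (the field names and sample document are illustrative, and you need the Lucene jars on the classpath):

```java
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field.Store;
import org.apache.lucene.document.TextField;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.queryparser.classic.QueryParser;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.SearcherManager;
import org.apache.lucene.search.TopDocs;
import org.apache.lucene.store.RAMDirectory;
import org.apache.lucene.util.Version;

public class NrtSearchExample {
    public static void main(String[] args) throws Exception {
        StandardAnalyzer analyzer = new StandardAnalyzer(Version.LUCENE_48);
        IndexWriter writer = new IndexWriter(new RAMDirectory(),
                new IndexWriterConfig(Version.LUCENE_48, analyzer));

        // One SearcherManager per index, shared across requests.
        // Passing null uses the default SearcherFactory.
        SearcherManager manager = new SearcherManager(writer, true, null);

        Document doc = new Document();
        doc.add(new TextField("body", "near real-time search demo", Store.YES));
        writer.addDocument(doc);

        // Make the new document visible without a full commit.
        manager.maybeRefreshBlocking();

        IndexSearcher searcher = manager.acquire();
        try {
            TopDocs hits = searcher.search(
                    new QueryParser(Version.LUCENE_48, "body", analyzer).parse("demo"), 10);
            System.out.println("hits: " + hits.totalHits);
        } finally {
            manager.release(searcher);  // never close an acquired searcher directly
        }
    }
}
```

The acquire/release discipline matters: SearcherManager reference-counts searchers so that a refresh never closes a reader a concurrent request is still using.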
                      

                      Community Discussions

                      Trending Discussions on lucene
                      • SonarQube Docker Installation CorruptIndexException: checksum failed
                      • Solr corrupt index exception
                      • Optimise conditional queries in Azure cognitive search
                      • JanusGraph Java unable to add vertex/edge
                      • Hibernate Search returns a string type
                      • Howto run Nearest Neighbour Search with Lucene HnswGraph
                      • Read Lucene FSDirectory from WebApi
                      • Grafana Elasticsearch - Query condition that references field value
                      • How to write/serialize lucene's ByteBuffersDirectory to disk?
                      • Opengrok cannot escape anchors on full search

                      QUESTION

                      SonarQube Docker Installation CorruptIndexException: checksum failed

                      Asked 2022-Mar-31 at 08:20

I'm trying to create a Docker container with SonarQube inside it, but I get this error when composing for the first time:

                      Caused by: java.util.concurrent.ExecutionException: org.apache.lucene.index.CorruptIndexException: checksum failed (hardware problem?) : expected=f736ed01 actual=298dcde2 (resource=BufferedChecksumIndexInput(NIOFSIndexInput(path="/opt/sonarqube/data/es7/nodes/0/_state/_7w.fdt")))
                      

I tried installing it on a fresh instance with a fresh Docker installation, and I even tried a different server to rule out hardware failure, but I still get the same error. What could be the cause?

                      docker-compose.yml

                      version: "3"
                      
                      services:
                        sonarqube:
                          image: sonarqube:community
                          depends_on:
                            - db
                          environment:
                            SONAR_JDBC_URL: jdbc:postgresql://db:5432/sonar
                            SONAR_JDBC_USERNAME: sonar
                            SONAR_JDBC_PASSWORD: sonar
                          volumes:
                            - sonarqube_data:/opt/sonarqube/data
                            - sonarqube_extensions:/opt/sonarqube/extensions
                            - sonarqube_logs:/opt/sonarqube/logs
                          ports:
                            - "9000:9000"
                        db:
                          image: postgres:12
                          environment:
                            POSTGRES_USER: sonar
                            POSTGRES_PASSWORD: sonar
                          volumes:
                            - postgresql:/var/lib/postgresql
                            - postgresql_data:/var/lib/postgresql/data
                      
                      volumes:
                        sonarqube_data:
                        sonarqube_extensions:
                        sonarqube_logs:
                        postgresql:
                        postgresql_data:
                      

                      ANSWER

                      Answered 2022-Mar-31 at 08:20

                      Solved it by using image: sonarqube:9.2.4-developer
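For reference, pinning the tag in the compose file from the question amounts to a one-line change (the tag is the one given in the answer):

```yaml
services:
  sonarqube:
    # pin an explicit release instead of the floating "community" tag
    image: sonarqube:9.2.4-developer
```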

                      Source https://stackoverflow.com/questions/71679132

Community Discussions and Code Snippets contain sources that include the Stack Exchange Network.

                      Vulnerabilities

                      No vulnerabilities reported

                      Install lucene

You can download it from GitHub.
You can use lucene like any standard Java library. Include the jar files in your classpath. You can also use any IDE to run and debug the lucene component as you would any other Java program. Best practice is to use a build tool that supports dependency management, such as Maven or Gradle. For Maven installation, refer to maven.apache.org; for Gradle installation, refer to gradle.org.
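Note that this repository itself is not published to a Maven repository (you build it from source), but the Apache Lucene libraries it builds on are. If you manage those with Maven, a dependency declaration looks like this (the version shown is illustrative; match the Lucene version the project targets):

```xml
<dependency>
  <groupId>org.apache.lucene</groupId>
  <artifactId>lucene-core</artifactId>
  <!-- illustrative version; pick the one the project targets -->
  <version>4.10.4</version>
</dependency>
```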

                      Support

For new features, suggestions, and bugs, create an issue on GitHub. If you have questions, check for existing answers and ask on Stack Overflow.


                      • © 2022 Open Weaver Inc.