SZT-bigdata | Shenzhen Metro Big Data Passenger Flow Analysis System 🚇🚄🌟

by geekyouth · Scala · Version: v0.13 · License: Non-SPDX

kandi X-RAY | SZT-bigdata Summary

SZT-bigdata is a Scala library typically used in Big Data, Kafka, Spark, and Hadoop applications. SZT-bigdata has no reported bugs or vulnerabilities, and it has medium support. However, SZT-bigdata has a Non-SPDX license. You can download it from GitHub.

                      kandi-support Support

SZT-bigdata has a medium active ecosystem.
It has 1776 star(s) with 550 fork(s). There are 60 watchers for this library.
It had no major release in the last 12 months.
There are 12 open issues and 7 have been closed. On average, issues are closed in 5 days. There are 3 open pull requests and 0 closed ones.
It has a neutral sentiment in the developer community.
The latest version of SZT-bigdata is v0.13.

                                  kandi-Quality Quality

SZT-bigdata has no bugs reported.

                                              kandi-Security Security

SZT-bigdata has no vulnerabilities reported, and its dependent libraries have no vulnerabilities reported.

                                                          kandi-License License

SZT-bigdata has a Non-SPDX License.
Non-SPDX licenses can be open-source licenses that are not SPDX-compliant, or non-open-source licenses; review them closely before use.

                                                                      kandi-Reuse Reuse

SZT-bigdata releases are available to install and integrate.
Installation instructions are not available. Examples and code snippets are available.

                                                                                  SZT-bigdata Key Features

Shenzhen Metro Big Data Passenger Flow Analysis System 🚇🚄🌟

                                                                                  SZT-bigdata Examples and Code Snippets

                                                                                  No Code Snippets are available at this moment for SZT-bigdata.
                                                                                  Community Discussions

                                                                                  Trending Discussions on Big Data

How to group unassociated content
Using Spark window with more than one partition when there is no obvious partitioning column
What is the best way to store 3+ million records in Firestore?
spark-shell throws java.lang.reflect.InvocationTargetException on running
For function over multiple rows (i+1)?
Filling up shuffle buffer (this may take a while)
Designing Twitter Search - How to sort large datasets?
Unnest Query optimisation for singular record
handling million of rows for lookup operation using python
split function does not return any observations with large dataset

                                                                                  QUESTION

                                                                                  How to group unassociated content
                                                                                  Asked 2022-Apr-15 at 12:43

                                                                                  I have a hive table that records user behavior

                                                                                  like this

+------+--------+----------+----+
|userid|behavior|timestamp |url |
+------+--------+----------+----+
|1     |view    |1650022601|url1|
|1     |click   |1650022602|url2|
|1     |click   |1650022614|url3|
|1     |view    |1650022617|url4|
|1     |click   |1650022622|url5|
|1     |view    |1650022626|url7|
|2     |view    |1650022628|url8|
|2     |view    |1650022631|url9|
+------+--------+----------+----+

                                                                                  About 400GB is added to the table every day.

I want to order by timestamp ascending, then group the rows so that each 'view' starts a new group that runs until the next 'view'. In the table above, the first 3 lines belong to the same group. Then I subtract the timestamps within each group, e.g. 1650022614 - 1650022601, as the view time.

                                                                                  How to do this?

I tried the lag and lead functions, and Scala like this:

val pairRDD: RDD[(Int, String)] = record.map(x => {
  if (StringUtil.isDateString(x.split("\\s+")(0))) {
    partition = partition + 1
  }
  (partition, x)
})
                                                                                  

or Java like this:

LongAccumulator part = spark.sparkContext().longAccumulator("part");

JavaPairRDD<Long, Row> pairRDD = spark.sql(sql).coalesce(1).javaRDD()
    .mapToPair((PairFunction<Row, Long, Row>) row -> {
        if ("pageview".equals(row.getAs("event"))) { // .equals, not == for strings
            part.add(1L);
        }
        return new Tuple2<>(part.value(), row);
    });
                                                                                  

but when the dataset is very large, this code is hopelessly slow.

Save me, please.

                                                                                  ANSWER

                                                                                  Answered 2022-Apr-15 at 12:43

If you use DataFrames, you can build the partition column with a window that sums an indicator column whose value is 1 on rows that start a new partition and 0 otherwise.

You can transform an RDD into a DataFrame with the sparkSession.createDataFrame() method, as explained in this answer.

Back to your problem. In your case, you start a new partition every time the behavior column equals "view". So we can start with this condition:

                                                                                  import org.apache.spark.sql.functions.col
                                                                                  
                                                                                  val df1 = df.withColumn("is_view", (col("behavior") === "view").cast("integer"))
                                                                                  

                                                                                  You get the following dataframe:

                                                                                  +------+--------+----------+----+-------+
                                                                                  |userid|behavior|timestamp |url |is_view|
                                                                                  +------+--------+----------+----+-------+
                                                                                  |1     |view    |1650022601|url1|1      |
                                                                                  |1     |click   |1650022602|url2|0      |
                                                                                  |1     |click   |1650022614|url3|0      |
                                                                                  |1     |view    |1650022617|url4|1      |
                                                                                  |1     |click   |1650022622|url5|0      |
                                                                                  |1     |view    |1650022626|url7|1      |
                                                                                  |2     |view    |1650022628|url8|1      |
                                                                                  |2     |view    |1650022631|url9|1      |
                                                                                  +------+--------+----------+----+-------+
                                                                                  

                                                                                  Then you use a window ordered by timestamp to sum over the is_view column:

                                                                                  import org.apache.spark.sql.expressions.Window
                                                                                  import org.apache.spark.sql.functions.sum
                                                                                  
                                                                                  val df2 = df1.withColumn("partition", sum("is_view").over(Window.partitionBy("userid").orderBy("timestamp")))
                                                                                  

Which gets you the following dataframe:

                                                                                  +------+--------+----------+----+-------+---------+
                                                                                  |userid|behavior|timestamp |url |is_view|partition|
                                                                                  +------+--------+----------+----+-------+---------+
                                                                                  |1     |view    |1650022601|url1|1      |1        |
                                                                                  |1     |click   |1650022602|url2|0      |1        |
                                                                                  |1     |click   |1650022614|url3|0      |1        |
                                                                                  |1     |view    |1650022617|url4|1      |2        |
                                                                                  |1     |click   |1650022622|url5|0      |2        |
                                                                                  |1     |view    |1650022626|url7|1      |3        |
                                                                                  |2     |view    |1650022628|url8|1      |1        |
                                                                                  |2     |view    |1650022631|url9|1      |2        |
                                                                                  +------+--------+----------+----+-------+---------+
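The indicator-plus-running-sum mechanism that produces this partition column can be sketched in plain Scala, outside Spark (a minimal illustration with the sample data, not the Spark code itself):

```scala
// Each event is (behavior, timestamp) for userid 1; a "view" starts a new group.
val events = Seq(
  ("view", 1650022601L), ("click", 1650022602L), ("click", 1650022614L),
  ("view", 1650022617L), ("click", 1650022622L), ("view", 1650022626L)
)

// is_view indicator: 1 when the row starts a new group, else 0.
val isView = events.map { case (b, _) => if (b == "view") 1 else 0 }

// Running sum of the indicator assigns a group id to every row,
// which is what sum("is_view").over(window) computes in Spark.
val groupIds = isView.scanLeft(0)(_ + _).tail
// groupIds == Seq(1, 1, 1, 2, 2, 3)
```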
                                                                                  

                                                                                  Then, you just have to aggregate per userid and partition:

                                                                                  import org.apache.spark.sql.functions.{max, min}
                                                                                  
                                                                                  val result = df2.groupBy("userid", "partition")
                                                                                    .agg((max("timestamp") - min("timestamp")).as("duration"))
                                                                                  

                                                                                  And you get the following results:

                                                                                  +------+---------+--------+
                                                                                  |userid|partition|duration|
                                                                                  +------+---------+--------+
                                                                                  |1     |1        |13      |
                                                                                  |1     |2        |5       |
                                                                                  |1     |3        |0       |
                                                                                  |2     |1        |0       |
                                                                                  |2     |2        |0       |
                                                                                  +------+---------+--------+
                                                                                  

The complete Scala code:

                                                                                  import org.apache.spark.sql.expressions.Window
                                                                                  import org.apache.spark.sql.functions.{col, max, min, sum}
                                                                                  
                                                                                  val result = df
                                                                                    .withColumn("is_view", (col("behavior") === "view").cast("integer"))
                                                                                    .withColumn("partition", sum("is_view").over(Window.partitionBy("userid").orderBy("timestamp")))
                                                                                    .groupBy("userid", "partition")
                                                                                    .agg((max("timestamp") - min("timestamp")).as("duration"))
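To sanity-check the aggregation step, the same max-minus-min per group can be reproduced in plain Scala on user 1's rows (a sketch with the sample data and the partition ids computed above, not Spark code):

```scala
// (partitionId, timestamp) pairs for userid 1.
val rows = Seq(
  (1, 1650022601L), (1, 1650022602L), (1, 1650022614L),
  (2, 1650022617L), (2, 1650022622L),
  (3, 1650022626L)
)

// Per partition: max(timestamp) - min(timestamp), i.e. the view duration.
val durations = rows.groupBy(_._1).map { case (g, ts) =>
  val times = ts.map(_._2)
  g -> (times.max - times.min)
}
// durations == Map(1 -> 13, 2 -> 5, 3 -> 0)
```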
                                                                                  

                                                                                  Source https://stackoverflow.com/questions/71883786

                                                                                  QUESTION

                                                                                  Using Spark window with more than one partition when there is no obvious partitioning column
                                                                                  Asked 2022-Apr-10 at 20:21

                                                                                  Here is the scenario. Assuming I have the following table:

+-----------+----+
|identifier |line|
+-----------+----+
|51169081604|2   |
|00034886044|22  |
|51168939455|52  |
+-----------+----+

The challenge is, for every value of the line column, to select the next biggest line value, which I have accomplished with the following SQL:

SELECT i1.line, i1.identifier,
MAX(i1.line) OVER (
    ORDER BY i1.line ROWS BETWEEN 1 PRECEDING AND 1 FOLLOWING
) AS parent
FROM global_temp.documentIdentifiers i1
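What that window frame computes, row by row, is the maximum over each row and its immediate neighbours; with ascending order, that is the following row's value (or the row's own, for the last row). A plain-Scala sketch on the three sample lines:

```scala
// line values sorted ascending, as in the sample table.
val lines = Seq(2L, 22L, 52L)

// MAX(line) OVER (ORDER BY line ROWS BETWEEN 1 PRECEDING AND 1 FOLLOWING):
// for each row, the max over a window of the row and its immediate neighbours.
val parent = lines.indices.map { i =>
  lines.slice(math.max(0, i - 1), math.min(lines.size, i + 2)).max
}
// parent == Seq(22, 52, 52)
```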
                                                                                  

That partially solves the challenge; the problem is that when I execute this code on Spark, the performance is terrible. The warning message is very clear about it:

                                                                                  No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.

Partitioning by either of the two fields does not work; it breaks the result, of course, since each partition is unaware of the lines in the others.

Does anyone have a clue how I can "select the next biggest line value" without performance issues?

                                                                                  Thanks

                                                                                  ANSWER

                                                                                  Answered 2022-Apr-10 at 20:21

Using your "next" approach AND assuming the data is generated in ascending line order, the following does work in parallel; whether it is actually faster you can tell me, as I do not know your volume of data. In any event, you cannot solve this with SQL (%sql) alone.

                                                                                  Here goes:

                                                                                  import org.apache.spark.sql.functions._
                                                                                  import org.apache.spark.sql.expressions.Window
                                                                                  import spark.implicits._
                                                                                  
                                                                                  case class X(identifier: Long, line: Long) // Too hard to explain, just gets around issues with df --> rdd --> df.
                                                                                  
                                                                                  // Gen some more data.
                                                                                  val df = Seq(
                                                                                   (1000000, 23), (1200, 56), (1201, 58), (1202, 60),
                                                                                   (8200, 63), (890000, 67), (990000, 99), (33000, 123),
                                                                                   (33001, 124), (33002, 126), (33009, 132), (33019, 133),
                                                                                   (33029, 134), (33039, 135), (800, 201), (1800, 999),
                                                                                   (1801, 1999), (1802, 2999), (1800444, 9999)
                                                                                   ).toDF("identifier", "line")
                                                                                  
                                                                                  // Add partition so as to be able to apply parallelism - except for upper boundary record.
                                                                                  val df2 = df.as[X]
                                                                                              .rdd
                                                                                              .mapPartitionsWithIndex((index, iter) => {
                                                                                                  iter.map(x => (index, x ))   
                                                                                               }).mapValues(v => (v.identifier, v.line)).map(x => (x._1, x._2._1, x._2._2))
                                                                                              .toDF("part", "identifier", "line")
                                                                                  
                                                                                  // Process per partition.
                                                                                  @transient val w = org.apache.spark.sql.expressions.Window.partitionBy("part").orderBy("line")  
                                                                                  val df3 = df2.withColumn("next", lead("line", 1, null).over(w))
                                                                                  
                                                                                  // Process upper boundary.
                                                                                  val df4 = df3.filter(df3("part") =!= 0).groupBy("part").agg(min("line").as("nxt")).toDF("pt", "nxt")
                                                                                  val df5 = df3.join(df4, (df3("part") === df4("pt") - 1), "outer" )
                                                                                  val df6 = df5.withColumn("next", when(col("next").isNull, col("nxt")).otherwise(col("next"))).select("identifier", "line", "next")
                                                                                  
                                                                                  // Display. Sort accordingly.
                                                                                  df6.show(false)
                                                                                  

                                                                                  returns:

                                                                                  +----------+----+----+
                                                                                  |identifier|line|next|
                                                                                  +----------+----+----+
                                                                                  |1000000   |23  |56  |
                                                                                  |1200      |56  |58  |
                                                                                  |1201      |58  |60  |
                                                                                  |1202      |60  |63  |
                                                                                  |8200      |63  |67  |
                                                                                  |890000    |67  |99  |
                                                                                  |990000    |99  |123 |
                                                                                  |33000     |123 |124 |
                                                                                  |33001     |124 |126 |
                                                                                  |33002     |126 |132 |
                                                                                  |33009     |132 |133 |
                                                                                  |33019     |133 |134 |
                                                                                  |33029     |134 |135 |
                                                                                  |33039     |135 |201 |
                                                                                  |800       |201 |999 |
                                                                                  |1800      |999 |1999|
                                                                                  |1801      |1999|2999|
                                                                                  |1802      |2999|9999|
                                                                                  |1800444   |9999|null|
                                                                                  +----------+----+----+
                                                                                  

You can add additional sorting, etc. This relies on a narrow transformation when adding the partition index. How you load the data may be an issue; caching is not considered here.

                                                                                  If the data is not ordered as stated above, range partitioning needs to occur first.
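The index-assignment step described above can be sketched in plain Scala. The sketch below is illustrative, not Spark API: partitions are simulated as local sequences, and `indexWithOffsets` is a hypothetical helper mimicking what Spark's `zipWithIndex` does internally. First each partition's element count is collected, then a narrow, shuffle-free step adds the partition's starting offset to each element's local position.

```scala
// Sketch: assign a global index per element of an already range-partitioned
// dataset. In Spark, zipWithIndex works similarly: one small job collects
// per-partition counts, then a narrow mapPartitionsWithIndex adds offsets.
object PartitionIndexSketch {
  def indexWithOffsets(partitions: Seq[Seq[Int]]): Seq[(Int, Long)] = {
    val counts  = partitions.map(_.size)       // one pass over each partition
    val offsets = counts.scanLeft(0L)(_ + _)   // starting offset per partition
    partitions.zipWithIndex.flatMap { case (part, pIdx) =>
      // Narrow step: each partition only needs its own offset, no shuffle.
      part.zipWithIndex.map { case (value, localIdx) =>
        (value, offsets(pIdx) + localIdx)
      }
    }
  }

  def main(args: Array[String]): Unit =
    indexWithOffsets(Seq(Seq(10, 20), Seq(30), Seq(40, 50, 60))).foreach(println)
}
```

Because each partition's starting offset is known up front, the per-element step stays narrow; only the small count collection touches the driver.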

                                                                                  Source https://stackoverflow.com/questions/71803991

                                                                                  QUESTION

                                                                                  What is the best way to store 3+ million records in Firestore?
                                                                                  Asked 2022-Apr-09 at 13:18

                                                                                  I want to store 3+ million records in my Firestore database, and I would like to know the best way/practice to do that.

                                                                                  Specifically, I want to store the price of each of 30 cryptos every 15 minutes since 01/01/2020.

                                                                                  For example:

                                                                                  • ETH price at 01/01/2020 at 00h00 = xxx
                                                                                  • ETH price at 01/01/2020 at 00h15 = xxx
                                                                                  • ETH price at 01/01/2020 at 00h30 = xxx
                                                                                  • ...
                                                                                  • ETH price at 09/04/2022 at 14h15 = xxx

                                                                                  and this, for 30 cryptos (or more).

                                                                                  So, 120 prices per day multiplied by 829 days multiplied by 30 cryptos ~= 3M records

                                                                                  I thought of saving this like this:

                                                                                  [Collection of Crypto] [Document of crypto] [Collection of dates] [Document of hour] [Price]

                                                                                  I don't know if this is the right way, that's why I come here :)

                                                                                  Of course, the goal of this database is to retrieve ALL the historical prices of a selected currency. This will let me compute statistics etc. later.

                                                                                  Thanks for your help

                                                                                  ANSWER

                                                                                  Answered 2022-Apr-09 at 13:18

                                                                                  For the current structure, instead of creating a document every 15 minutes, you can create a single "prices" document per day and store an array of entries of the form { time: "00:00", price: 100 }. Fetching a given currency's prices for a day then costs only 1 read instead of 96.

                                                                                  Alternatively, you can create a single collection "prices" and create a document every day for each currency. A document in this collection can look like this:

                                                                                  {
                                                                                    name: "BTC",
                                                                                    date: "2022/04/09", // or Firestore timestamp
                                                                                    prices: [
                                                                                      { time: "00:05", price: 12.345 },
                                                                                      { time: "00:10", price: 6.345 },
                                                                                      { time: "00:15", price: 68.586 },
                                                                                    ]
                                                                                  }
                                                                                  

                                                                                  With this structure you can also query the rates of a particular coin in a given date range. An example of this query:

                                                                                  import { getDocs, query, collection, where } from "firebase/firestore";
                                                                                  
                                                                                  // The range filter must target the "date" field defined in the document
                                                                                  // above. An equality filter combined with a range filter on a different
                                                                                  // field requires a composite index on (name, date).
                                                                                  const qSnap = await getDocs(
                                                                                    query(
                                                                                      collection(db, "prices"),
                                                                                      where("name", "==", "BTC"),
                                                                                      where("date", ">=", startDateTimestamp),
                                                                                      where("date", "<", endDateTimestamp)
                                                                                    )
                                                                                  );
                                                                                  

                                                                                  Source https://stackoverflow.com/questions/71808107

                                                                                  QUESTION

                                                                                  spark-shell throws java.lang.reflect.InvocationTargetException on running
                                                                                  Asked 2022-Apr-01 at 19:53

                                                                                  When I execute run-example SparkPi, for example, it works perfectly, but when I run spark-shell, it throws these exceptions:

                                                                                  WARNING: An illegal reflective access operation has occurred
                                                                                  WARNING: Illegal reflective access by org.apache.spark.unsafe.Platform (file:/C:/big_data/spark-3.2.0-bin-hadoop3.2-scala2.13/jars/spark-unsafe_2.13-3.2.0.jar) to constructor java.nio.DirectByteBuffer(long,int)
                                                                                  WARNING: Please consider reporting this to the maintainers of org.apache.spark.unsafe.Platform
                                                                                  WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
                                                                                  WARNING: All illegal access operations will be denied in a future release
                                                                                  Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
                                                                                  Setting default log level to "WARN".
                                                                                  To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
                                                                                  Welcome to
                                                                                        ____              __
                                                                                       / __/__  ___ _____/ /__
                                                                                      _\ \/ _ \/ _ `/ __/  '_/
                                                                                     /___/ .__/\_,_/_/ /_/\_\   version 3.2.0
                                                                                        /_/
                                                                                  
                                                                                  Using Scala version 2.13.5 (OpenJDK 64-Bit Server VM, Java 11.0.9.1)
                                                                                  Type in expressions to have them evaluated.
                                                                                  Type :help for more information.
                                                                                  21/12/11 19:28:36 ERROR SparkContext: Error initializing SparkContext.
                                                                                  java.lang.reflect.InvocationTargetException
                                                                                          at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
                                                                                          at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
                                                                                          at java.base/jdk.internal.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
                                                                                          at java.base/java.lang.reflect.Constructor.newInstance(Constructor.java:490)
                                                                                          at org.apache.spark.executor.Executor.addReplClassLoaderIfNeeded(Executor.scala:909)
                                                                                          at org.apache.spark.executor.Executor.<init>(Executor.scala:160)
                                                                                          at org.apache.spark.scheduler.local.LocalEndpoint.<init>(LocalSchedulerBackend.scala:64)
                                                                                          at org.apache.spark.scheduler.local.LocalSchedulerBackend.start(LocalSchedulerBackend.scala:132)
                                                                                          at org.apache.spark.scheduler.TaskSchedulerImpl.start(TaskSchedulerImpl.scala:220)
                                                                                          at org.apache.spark.SparkContext.<init>(SparkContext.scala:581)
                                                                                          at org.apache.spark.SparkContext$.getOrCreate(SparkContext.scala:2690)
                                                                                          at org.apache.spark.sql.SparkSession$Builder.$anonfun$getOrCreate$2(SparkSession.scala:949)
                                                                                          at scala.Option.getOrElse(Option.scala:201)
                                                                                          at org.apache.spark.sql.SparkSession$Builder.getOrCreate(SparkSession.scala:943)
                                                                                          at org.apache.spark.repl.Main$.createSparkSession(Main.scala:114)
                                                                                          at $line3.$read$$iw.<init>(<console>:5)
                                                                                          at $line3.$read.<init>(<console>:4)
                                                                                          at $line3.$read$.<clinit>(<console>)
                                                                                          at $line3.$eval$.$print$lzycompute(<console>:6)
                                                                                          at $line3.$eval$.$print(<console>:5)
                                                                                          at $line3.$eval.$print(<console>)
                                                                                          at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
                                                                                          at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
                                                                                          at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
                                                                                          at java.base/java.lang.reflect.Method.invoke(Method.java:566)
                                                                                          at scala.tools.nsc.interpreter.IMain$ReadEvalPrint.call(IMain.scala:670)
                                                                                          at scala.tools.nsc.interpreter.IMain$Request.loadAndRun(IMain.scala:1006)
                                                                                          at scala.tools.nsc.interpreter.IMain.$anonfun$doInterpret$1(IMain.scala:506)
                                                                                          at scala.reflect.internal.util.ScalaClassLoader.asContext(ScalaClassLoader.scala:36)
                                                                                          at scala.reflect.internal.util.ScalaClassLoader.asContext$(ScalaClassLoader.scala:116)
                                                                                          at scala.reflect.internal.util.AbstractFileClassLoader.asContext(AbstractFileClassLoader.scala:43)
                                                                                          at scala.tools.nsc.interpreter.IMain.loadAndRunReq$1(IMain.scala:505)
                                                                                          at scala.tools.nsc.interpreter.IMain.$anonfun$doInterpret$3(IMain.scala:519)
                                                                                          at scala.tools.nsc.interpreter.IMain.doInterpret(IMain.scala:519)
                                                                                          at scala.tools.nsc.interpreter.IMain.interpret(IMain.scala:503)
                                                                                          at scala.tools.nsc.interpreter.IMain.interpret(IMain.scala:501)
                                                                                          at scala.tools.nsc.interpreter.IMain.$anonfun$quietRun$1(IMain.scala:216)
                                                                                          at scala.tools.nsc.interpreter.shell.ReplReporterImpl.withoutPrintingResults(Reporter.scala:64)
                                                                                          at scala.tools.nsc.interpreter.IMain.quietRun(IMain.scala:216)
                                                                                          at scala.tools.nsc.interpreter.shell.ILoop.$anonfun$interpretPreamble$1(ILoop.scala:924)
                                                                                          at scala.collection.immutable.List.foreach(List.scala:333)
                                                                                          at scala.tools.nsc.interpreter.shell.ILoop.interpretPreamble(ILoop.scala:924)
                                                                                          at scala.tools.nsc.interpreter.shell.ILoop.$anonfun$run$3(ILoop.scala:963)
                                                                                          at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.scala:18)
                                                                                          at scala.tools.nsc.interpreter.shell.ILoop.echoOff(ILoop.scala:90)
                                                                                          at scala.tools.nsc.interpreter.shell.ILoop.$anonfun$run$2(ILoop.scala:963)
                                                                                          at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.scala:18)
                                                                                          at scala.tools.nsc.interpreter.IMain.withSuppressedSettings(IMain.scala:1406)
                                                                                          at scala.tools.nsc.interpreter.shell.ILoop.$anonfun$run$1(ILoop.scala:954)
                                                                                          at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.scala:18)
                                                                                          at scala.tools.nsc.interpreter.shell.ReplReporterImpl.withoutPrintingResults(Reporter.scala:64)
                                                                                          at scala.tools.nsc.interpreter.shell.ILoop.run(ILoop.scala:954)
                                                                                          at org.apache.spark.repl.Main$.doMain(Main.scala:84)
                                                                                          at org.apache.spark.repl.Main$.main(Main.scala:59)
                                                                                          at org.apache.spark.repl.Main.main(Main.scala)
                                                                                          at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
                                                                                          at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
                                                                                          at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
                                                                                          at java.base/java.lang.reflect.Method.invoke(Method.java:566)
                                                                                          at org.apache.spark.deploy.JavaMainApplication.start(SparkApplication.scala:52)
                                                                                          at org.apache.spark.deploy.SparkSubmit.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:955)
                                                                                          at org.apache.spark.deploy.SparkSubmit.doRunMain$1(SparkSubmit.scala:180)
                                                                                          at org.apache.spark.deploy.SparkSubmit.submit(SparkSubmit.scala:203)
                                                                                          at org.apache.spark.deploy.SparkSubmit.doSubmit(SparkSubmit.scala:90)
                                                                                          at org.apache.spark.deploy.SparkSubmit$$anon$2.doSubmit(SparkSubmit.scala:1043)
                                                                                          at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:1052)
                                                                                          at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
                                                                                  Caused by: java.net.URISyntaxException: Illegal character in path at index 42: spark://DESKTOP-JO73CF4.mshome.net:2103/C:\classes
                                                                                          at java.base/java.net.URI$Parser.fail(URI.java:2913)
                                                                                          at java.base/java.net.URI$Parser.checkChars(URI.java:3084)
                                                                                          at java.base/java.net.URI$Parser.parseHierarchical(URI.java:3166)
                                                                                          at java.base/java.net.URI$Parser.parse(URI.java:3114)
                                                                                          at java.base/java.net.URI.<init>(URI.java:600)
                                                                                          at org.apache.spark.repl.ExecutorClassLoader.<init>(ExecutorClassLoader.scala:57)
                                                                                          ... 67 more
                                                                                  21/12/11 19:28:36 ERROR Utils: Uncaught exception in thread main
                                                                                  java.lang.NullPointerException
                                                                                          at org.apache.spark.scheduler.local.LocalSchedulerBackend.org$apache$spark$scheduler$local$LocalSchedulerBackend$$stop(LocalSchedulerBackend.scala:173)
                                                                                          at org.apache.spark.scheduler.local.LocalSchedulerBackend.stop(LocalSchedulerBackend.scala:144)
                                                                                          at org.apache.spark.scheduler.TaskSchedulerImpl.stop(TaskSchedulerImpl.scala:927)
                                                                                          at org.apache.spark.scheduler.DAGScheduler.stop(DAGScheduler.scala:2516)
                                                                                          at org.apache.spark.SparkContext.$anonfun$stop$12(SparkContext.scala:2086)
                                                                                          at org.apache.spark.util.Utils$.tryLogNonFatalError(Utils.scala:1442)
                                                                                          at org.apache.spark.SparkContext.stop(SparkContext.scala:2086)
                                                                                          at org.apache.spark.SparkContext.<init>(SparkContext.scala:677)
                                                                                          at org.apache.spark.SparkContext$.getOrCreate(SparkContext.scala:2690)
                                                                                          at org.apache.spark.sql.SparkSession$Builder.$anonfun$getOrCreate$2(SparkSession.scala:949)
                                                                                          at scala.Option.getOrElse(Option.scala:201)
                                                                                          at org.apache.spark.sql.SparkSession$Builder.getOrCreate(SparkSession.scala:943)
                                                                                          at org.apache.spark.repl.Main$.createSparkSession(Main.scala:114)
                                                                                          at $line3.$read$$iw.<init>(<console>:5)
                                                                                          at $line3.$read.<init>(<console>:4)
                                                                                          at $line3.$read$.<clinit>(<console>)
                                                                                          at $line3.$eval$.$print$lzycompute(<console>:6)
                                                                                          at $line3.$eval$.$print(<console>:5)
                                                                                          at $line3.$eval.$print(<console>)
                                                                                          at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
                                                                                          at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
                                                                                          at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
                                                                                          at java.base/java.lang.reflect.Method.invoke(Method.java:566)
                                                                                          at scala.tools.nsc.interpreter.IMain$ReadEvalPrint.call(IMain.scala:670)
                                                                                          at scala.tools.nsc.interpreter.IMain$Request.loadAndRun(IMain.scala:1006)
                                                                                          at scala.tools.nsc.interpreter.IMain.$anonfun$doInterpret$1(IMain.scala:506)
                                                                                          at scala.reflect.internal.util.ScalaClassLoader.asContext(ScalaClassLoader.scala:36)
                                                                                          at scala.reflect.internal.util.ScalaClassLoader.asContext$(ScalaClassLoader.scala:116)
                                                                                          at scala.reflect.internal.util.AbstractFileClassLoader.asContext(AbstractFileClassLoader.scala:43)
                                                                                          at scala.tools.nsc.interpreter.IMain.loadAndRunReq$1(IMain.scala:505)
                                                                                          at scala.tools.nsc.interpreter.IMain.$anonfun$doInterpret$3(IMain.scala:519)
                                                                                          at scala.tools.nsc.interpreter.IMain.doInterpret(IMain.scala:519)
                                                                                          at scala.tools.nsc.interpreter.IMain.interpret(IMain.scala:503)
                                                                                          at scala.tools.nsc.interpreter.IMain.interpret(IMain.scala:501)
                                                                                          at scala.tools.nsc.interpreter.IMain.$anonfun$quietRun$1(IMain.scala:216)
                                                                                          at scala.tools.nsc.interpreter.shell.ReplReporterImpl.withoutPrintingResults(Reporter.scala:64)
                                                                                          at scala.tools.nsc.interpreter.IMain.quietRun(IMain.scala:216)
                                                                                          at scala.tools.nsc.interpreter.shell.ILoop.$anonfun$interpretPreamble$1(ILoop.scala:924)
                                                                                          at scala.collection.immutable.List.foreach(List.scala:333)
                                                                                          at scala.tools.nsc.interpreter.shell.ILoop.interpretPreamble(ILoop.scala:924)
                                                                                          at scala.tools.nsc.interpreter.shell.ILoop.$anonfun$run$3(ILoop.scala:963)
                                                                                          at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.scala:18)
                                                                                          at scala.tools.nsc.interpreter.shell.ILoop.echoOff(ILoop.scala:90)
                                                                                          at scala.tools.nsc.interpreter.shell.ILoop.$anonfun$run$2(ILoop.scala:963)
                                                                                          at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.scala:18)
                                                                                          at scala.tools.nsc.interpreter.IMain.withSuppressedSettings(IMain.scala:1406)
                                                                                          at scala.tools.nsc.interpreter.shell.ILoop.$anonfun$run$1(ILoop.scala:954)
                                                                                          at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.scala:18)
                                                                                          at scala.tools.nsc.interpreter.shell.ReplReporterImpl.withoutPrintingResults(Reporter.scala:64)
                                                                                          at scala.tools.nsc.interpreter.shell.ILoop.run(ILoop.scala:954)
                                                                                          at org.apache.spark.repl.Main$.doMain(Main.scala:84)
                                                                                          at org.apache.spark.repl.Main$.main(Main.scala:59)
                                                                                          at org.apache.spark.repl.Main.main(Main.scala)
                                                                                          at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
                                                                                          at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
                                                                                          at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
                                                                                          at java.base/java.lang.reflect.Method.invoke(Method.java:566)
                                                                                          at org.apache.spark.deploy.JavaMainApplication.start(SparkApplication.scala:52)
                                                                                          at org.apache.spark.deploy.SparkSubmit.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:955)
                                                                                          at org.apache.spark.deploy.SparkSubmit.doRunMain$1(SparkSubmit.scala:180)
                                                                                          at org.apache.spark.deploy.SparkSubmit.submit(SparkSubmit.scala:203)
                                                                                          at org.apache.spark.deploy.SparkSubmit.doSubmit(SparkSubmit.scala:90)
                                                                                          at org.apache.spark.deploy.SparkSubmit$$anon$2.doSubmit(SparkSubmit.scala:1043)
                                                                                          at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:1052)
                                                                                          at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
                                                                                  21/12/11 19:28:36 WARN MetricsSystem: Stopping a MetricsSystem that is not running
                                                                                  21/12/11 19:28:36 ERROR Main: Failed to initialize Spark session.
                                                                                  java.lang.reflect.InvocationTargetException
                                                                                          at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
                                                                                          at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
                                                                                          at java.base/jdk.internal.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
                                                                                          at java.base/java.lang.reflect.Constructor.newInstance(Constructor.java:490)
                                                                                          at org.apache.spark.executor.Executor.addReplClassLoaderIfNeeded(Executor.scala:909)
                                                                                          at org.apache.spark.executor.Executor.<init>(Executor.scala:160)
                                                                                          at org.apache.spark.scheduler.local.LocalEndpoint.<init>(LocalSchedulerBackend.scala:64)
                                                                                          at org.apache.spark.scheduler.local.LocalSchedulerBackend.start(LocalSchedulerBackend.scala:132)
                                                                                          at org.apache.spark.scheduler.TaskSchedulerImpl.start(TaskSchedulerImpl.scala:220)
                                                                                          at org.apache.spark.SparkContext.<init>(SparkContext.scala:581)
                                                                                          at org.apache.spark.SparkContext$.getOrCreate(SparkContext.scala:2690)
                                                                                          at org.apache.spark.sql.SparkSession$Builder.$anonfun$getOrCreate$2(SparkSession.scala:949)
                                                                                          at scala.Option.getOrElse(Option.scala:201)
                                                                                          at org.apache.spark.sql.SparkSession$Builder.getOrCreate(SparkSession.scala:943)
                                                                                          at org.apache.spark.repl.Main$.createSparkSession(Main.scala:114)
                                                                                          at $line3.$read$$iw.<init>(<console>:5)
                                                                                          at $line3.$read.<init>(<console>:4)
                                                                                          at $line3.$read$.<init>(<console>)
                                                                                          at $line3.$eval$.$print$lzycompute(<console>:6)
                                                                                          at $line3.$eval$.$print(<console>:5)
                                                                                          at $line3.$eval.$print(<console>)
                                                                                          at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
                                                                                          at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
                                                                                          at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
                                                                                          at java.base/java.lang.reflect.Method.invoke(Method.java:566)
                                                                                          at scala.tools.nsc.interpreter.IMain$ReadEvalPrint.call(IMain.scala:670)
                                                                                          at scala.tools.nsc.interpreter.IMain$Request.loadAndRun(IMain.scala:1006)
                                                                                          at scala.tools.nsc.interpreter.IMain.$anonfun$doInterpret$1(IMain.scala:506)
                                                                                          at scala.reflect.internal.util.ScalaClassLoader.asContext(ScalaClassLoader.scala:36)
                                                                                          at scala.reflect.internal.util.ScalaClassLoader.asContext$(ScalaClassLoader.scala:116)
                                                                                          at scala.reflect.internal.util.AbstractFileClassLoader.asContext(AbstractFileClassLoader.scala:43)
                                                                                          at scala.tools.nsc.interpreter.IMain.loadAndRunReq$1(IMain.scala:505)
                                                                                          at scala.tools.nsc.interpreter.IMain.$anonfun$doInterpret$3(IMain.scala:519)
                                                                                          at scala.tools.nsc.interpreter.IMain.doInterpret(IMain.scala:519)
                                                                                          at scala.tools.nsc.interpreter.IMain.interpret(IMain.scala:503)
                                                                                          at scala.tools.nsc.interpreter.IMain.interpret(IMain.scala:501)
                                                                                          at scala.tools.nsc.interpreter.IMain.$anonfun$quietRun$1(IMain.scala:216)
                                                                                          at scala.tools.nsc.interpreter.shell.ReplReporterImpl.withoutPrintingResults(Reporter.scala:64)
                                                                                          at scala.tools.nsc.interpreter.IMain.quietRun(IMain.scala:216)
                                                                                          at scala.tools.nsc.interpreter.shell.ILoop.$anonfun$interpretPreamble$1(ILoop.scala:924)
                                                                                          at scala.collection.immutable.List.foreach(List.scala:333)
                                                                                          at scala.tools.nsc.interpreter.shell.ILoop.interpretPreamble(ILoop.scala:924)
                                                                                          at scala.tools.nsc.interpreter.shell.ILoop.$anonfun$run$3(ILoop.scala:963)
                                                                                          at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.scala:18)
                                                                                          at scala.tools.nsc.interpreter.shell.ILoop.echoOff(ILoop.scala:90)
                                                                                          at scala.tools.nsc.interpreter.shell.ILoop.$anonfun$run$2(ILoop.scala:963)
                                                                                          at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.scala:18)
                                                                                          at scala.tools.nsc.interpreter.IMain.withSuppressedSettings(IMain.scala:1406)
                                                                                          at scala.tools.nsc.interpreter.shell.ILoop.$anonfun$run$1(ILoop.scala:954)
                                                                                          at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.scala:18)
                                                                                          at scala.tools.nsc.interpreter.shell.ReplReporterImpl.withoutPrintingResults(Reporter.scala:64)
                                                                                          at scala.tools.nsc.interpreter.shell.ILoop.run(ILoop.scala:954)
                                                                                          at org.apache.spark.repl.Main$.doMain(Main.scala:84)
                                                                                          at org.apache.spark.repl.Main$.main(Main.scala:59)
                                                                                          at org.apache.spark.repl.Main.main(Main.scala)
                                                                                          at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
                                                                                          at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
                                                                                          at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
                                                                                          at java.base/java.lang.reflect.Method.invoke(Method.java:566)
                                                                                          at org.apache.spark.deploy.JavaMainApplication.start(SparkApplication.scala:52)
                                                                                          at org.apache.spark.deploy.SparkSubmit.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:955)
                                                                                          at org.apache.spark.deploy.SparkSubmit.doRunMain$1(SparkSubmit.scala:180)
                                                                                          at org.apache.spark.deploy.SparkSubmit.submit(SparkSubmit.scala:203)
                                                                                          at org.apache.spark.deploy.SparkSubmit.doSubmit(SparkSubmit.scala:90)
                                                                                          at org.apache.spark.deploy.SparkSubmit$$anon$2.doSubmit(SparkSubmit.scala:1043)
                                                                                          at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:1052)
                                                                                          at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
                                                                                  Caused by: java.net.URISyntaxException: Illegal character in path at index 42: spark://DESKTOP-JO73CF4.mshome.net:2103/C:\classes
                                                                                          at java.base/java.net.URI$Parser.fail(URI.java:2913)
                                                                                          at java.base/java.net.URI$Parser.checkChars(URI.java:3084)
                                                                                          at java.base/java.net.URI$Parser.parseHierarchical(URI.java:3166)
                                                                                          at java.base/java.net.URI$Parser.parse(URI.java:3114)
                                                                                          at java.base/java.net.URI.<init>(URI.java:600)
                                                                                          at org.apache.spark.repl.ExecutorClassLoader.<init>(ExecutorClassLoader.scala:57)
                                                                                          ... 67 more
                                                                                  21/12/11 19:28:36 ERROR Utils: Uncaught exception in thread shutdown-hook-0
                                                                                  java.lang.ExceptionInInitializerError
                                                                                          at org.apache.spark.executor.Executor.stop(Executor.scala:333)
                                                                                          at org.apache.spark.executor.Executor.$anonfun$stopHookReference$1(Executor.scala:76)
                                                                                          at org.apache.spark.util.SparkShutdownHook.run(ShutdownHookManager.scala:214)
                                                                                          at org.apache.spark.util.SparkShutdownHookManager.$anonfun$runAll$2(ShutdownHookManager.scala:188)
                                                                                          at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.scala:18)
                                                                                          at org.apache.spark.util.Utils$.logUncaughtExceptions(Utils.scala:2019)
                                                                                          at org.apache.spark.util.SparkShutdownHookManager.$anonfun$runAll$1(ShutdownHookManager.scala:188)
                                                                                          at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.scala:18)
                                                                                          at scala.util.Try$.apply(Try.scala:210)
                                                                                          at org.apache.spark.util.SparkShutdownHookManager.runAll(ShutdownHookManager.scala:188)
                                                                                          at org.apache.spark.util.SparkShutdownHookManager$$anon$2.run(ShutdownHookManager.scala:178)
                                                                                          at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
                                                                                          at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
                                                                                          at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
                                                                                          at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
                                                                                          at java.base/java.lang.Thread.run(Thread.java:829)
                                                                                  Caused by: java.lang.NullPointerException
                                                                                          at org.apache.spark.shuffle.ShuffleBlockPusher$.<clinit>(ShuffleBlockPusher.scala:465)
                                                                                          ... 16 more
                                                                                  21/12/11 19:28:36 WARN ShutdownHookManager: ShutdownHook '' failed, java.util.concurrent.ExecutionException: java.lang.ExceptionInInitializerError
                                                                                  java.util.concurrent.ExecutionException: java.lang.ExceptionInInitializerError
                                                                                          at java.base/java.util.concurrent.FutureTask.report(FutureTask.java:122)
                                                                                          at java.base/java.util.concurrent.FutureTask.get(FutureTask.java:205)
                                                                                          at org.apache.hadoop.util.ShutdownHookManager.executeShutdown(ShutdownHookManager.java:124)
                                                                                          at org.apache.hadoop.util.ShutdownHookManager$1.run(ShutdownHookManager.java:95)
                                                                                  Caused by: java.lang.ExceptionInInitializerError
                                                                                          at org.apache.spark.executor.Executor.stop(Executor.scala:333)
                                                                                          at org.apache.spark.executor.Executor.$anonfun$stopHookReference$1(Executor.scala:76)
                                                                                          at org.apache.spark.util.SparkShutdownHook.run(ShutdownHookManager.scala:214)
                                                                                          at org.apache.spark.util.SparkShutdownHookManager.$anonfun$runAll$2(ShutdownHookManager.scala:188)
                                                                                          at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.scala:18)
                                                                                          at org.apache.spark.util.Utils$.logUncaughtExceptions(Utils.scala:2019)
                                                                                          at org.apache.spark.util.SparkShutdownHookManager.$anonfun$runAll$1(ShutdownHookManager.scala:188)
                                                                                          at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.scala:18)
                                                                                          at scala.util.Try$.apply(Try.scala:210)
                                                                                          at org.apache.spark.util.SparkShutdownHookManager.runAll(ShutdownHookManager.scala:188)
                                                                                          at org.apache.spark.util.SparkShutdownHookManager$$anon$2.run(ShutdownHookManager.scala:178)
                                                                                          at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
                                                                                          at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
                                                                                          at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
                                                                                          at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
                                                                                          at java.base/java.lang.Thread.run(Thread.java:829)
                                                                                  Caused by: java.lang.NullPointerException
                                                                                          at org.apache.spark.shuffle.ShuffleBlockPusher$.<clinit>(ShuffleBlockPusher.scala:465)
                                                                                          ... 16 more
                                                                                  

                                                                                  As far as I can see, it is caused by Illegal character in path at index 42: spark://DESKTOP-JO73CF4.mshome.net:2103/C:\classes, but I don't understand what exactly it means or how to deal with it.

                                                                                  How can I solve this problem?

                                                                                  I use Spark 3.2.0 Pre-built for Apache Hadoop 3.3 and later (Scala 2.13)

                                                                                  JAVA_HOME, HADOOP_HOME, SPARK_HOME path variables are set.

                                                                                  ANSWER

                                                                                  Answered 2022-Jan-07 at 15:11

                                                                                  I faced the same problem; I think Spark 3.2 itself is the problem.

                                                                                  Switched to Spark 3.1.2 and it works fine.
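To make the message concrete: `C:\classes` is a Windows filesystem path that got embedded into a `spark://` URI, and the backslash is not a legal character in a URI path. "Index 42" is simply the position of that backslash in the string, which a quick sketch confirms:

```python
# Locate the character the URI parser rejects. The string is copied from
# the error message above; index 42 turns out to be the backslash that
# Windows uses as a path separator in "C:\classes".
uri = r"spark://DESKTOP-JO73CF4.mshome.net:2103/C:\classes"
print(uri.index("\\"))  # 42
```

That is why the failure shows up on Windows with this Spark build but not on platforms whose paths use forward slashes.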

                                                                                  Source https://stackoverflow.com/questions/70317481

                                                                                  QUESTION

                                                                                  For function over multiple rows (i+1)?
                                                                                  Asked 2022-Mar-30 at 08:31

                                                                                  New to R, my apologies if there is an easy answer that I don't know of.

                                                                                  I have a data frame with 127,124 observations and 5 variables.

                                                                                  head(SortedDF)

                                                                                         number Retention.time..min. Charge      m.z Group
                                                                                  102864   6947             12.58028      5 375.0021 Pro
                                                                                  68971   60641             23.36693      2 375.1373 Pro
                                                                                  75001  104156             24.54187      3 375.1540 Pro
                                                                                  87435  146322             22.69630      3 375.1540 Pro
                                                                                  82658   88256             22.32042      3 375.1541 Pro
                                                                                  113553  97971             14.54600      3 375.1566 Pro
                                                                                  ...
                                                                                  

                                                                                  I want to compare every row with the row underneath it (so basically row number i vs. row number i+1) and see if they match. After reading about for loops and if-else statements, I came up with this code:

                                                                                  for (i in 1:dim(SortedDF)) 
                                                                                    if(abs(m.z[i]-m.z[i+1])<0.01 | abs(Retention.time..min.[i]-Retention.time..min.[i+1])<1 | (Charge[i]=Charge[i+1]) | Group[i]!=Group[i+1]) 
                                                                                      print("Match")
                                                                                    else
                                                                                      print("No match")
                                                                                  

                                                                                  However, this code does not work: it only prints the first result [1], and I'm not sure whether i+1 is valid here. Is there any way to solve this without using i+1?

                                                                                  ANSWER

                                                                                  Answered 2022-Mar-30 at 08:31
                                                                                  library(tidyverse)
                                                                                  
                                                                                  data <- tibble(x = c(1, 1, 2), y = "a")
                                                                                  data
                                                                                  #> # A tibble: 3 ร— 2
                                                                                  #>       x y    
                                                                                   #>   <dbl> <chr>
                                                                                  #> 1     1 a    
                                                                                  #> 2     1 a    
                                                                                  #> 3     2 a
                                                                                  
                                                                                  same_rows <-
                                                                                    data %>%
                                                                                    # consider all columns
                                                                                    unite(col = "all") %>%
                                                                                    transmute(same_as_next_row = all == lead(all))
                                                                                  
                                                                                  data %>%
                                                                                    bind_cols(same_rows)
                                                                                  #> # A tibble: 3 ร— 3
                                                                                  #>       x y     same_as_next_row
                                                                                   #>   <dbl> <chr> <lgl>
                                                                                  #> 1     1 a     TRUE            
                                                                                  #> 2     1 a     FALSE           
                                                                                  #> 3     2 a     NA
                                                                                  

                                                                                  Created on 2022-03-30 by the reprex package (v2.0.0)

                                                                                  library(tidyverse)
                                                                                  
                                                                                  data <- tibble::tribble(
                                                                                    ~id, ~number, ~Retention.time..min., ~Charge, ~m.z, ~Group,
                                                                                    102864, 6947, 12.58028, 5, 375.0021, "Pro",
                                                                                    68971, 60641, 23.36693, 2, 375.1373, "Pro",
                                                                                    75001, 104156, 24.54187, 3, 375.1540, "Pro",
                                                                                    87435, 146322, 22.69630, 3, 375.1540, "Pro",
                                                                                    82658, 88256, 22.32042, 3, 375.1541, "Pro",
                                                                                    113553, 97971, 14.54600, 3, 375.1566, "Pro"
                                                                                  )
                                                                                  
                                                                                  data %>%
                                                                                    mutate(
                                                                                      matches_with_next_row = (abs(m.z - lead(m.z)) < 0.01) |
                                                                                        (abs(Retention.time..min. - lead(Retention.time..min.)) < 1)
                                                                                    )
                                                                                  #> # A tibble: 6 ร— 7
                                                                                  #>       id number Retention.time..min. Charge   m.z Group matches_with_next_row
                                                                                   #>    <dbl>  <dbl>                <dbl>   <dbl> <dbl> <chr> <lgl>
                                                                                  #> 1 102864   6947                 12.6      5  375. Pro   FALSE                
                                                                                  #> 2  68971  60641                 23.4      2  375. Pro   FALSE                
                                                                                  #> 3  75001 104156                 24.5      3  375. Pro   TRUE                 
                                                                                  #> 4  87435 146322                 22.7      3  375. Pro   TRUE                 
                                                                                  #> 5  82658  88256                 22.3      3  375. Pro   TRUE                 
                                                                                  #> 6 113553  97971                 14.5      3  375. Pro   NA
                                                                                  

                                                                                  Created on 2022-03-30 by the reprex package (v2.0.0)
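For readers coming from Python, the same row-vs-next-row idea can be sketched with pandas, where `shift(-1)` plays the role of dplyr's `lead()`. The data below is the sample from the question; note that pandas yields `False` rather than `NA` for the last row, since comparisons against the missing shifted value are false:

```python
import pandas as pd

# Sample rows copied from the head(SortedDF) output in the question.
df = pd.DataFrame({
    "Retention.time..min.": [12.58028, 23.36693, 24.54187, 22.69630, 22.32042, 14.54600],
    "m.z": [375.0021, 375.1373, 375.1540, 375.1540, 375.1541, 375.1566],
})

# shift(-1) aligns each row with the row below it, like dplyr's lead().
matches = (
    (df["m.z"] - df["m.z"].shift(-1)).abs().lt(0.01)
    | (df["Retention.time..min."] - df["Retention.time..min."].shift(-1)).abs().lt(1)
)
print(matches.tolist())  # [False, False, True, True, True, False]
```

The last value is `False` only because the final row has no successor; handle it explicitly if `NA` semantics matter.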

                                                                                  Source https://stackoverflow.com/questions/71673259

                                                                                  QUESTION

                                                                                  Filling up shuffle buffer (this may take a while)
                                                                                  Asked 2022-Mar-28 at 20:44

                                                                                  I have a dataset of video frames taken from 1000 real videos and 1000 deepfake videos. Each video is converted to 300 frames in the preprocessing phase; in other words, I have 300,000 images with the Real (0) label and 300,000 images with the Fake (1) label. I want to train MesoNet with this data. I used a custom DataGenerator class to handle the train, validation, and test data with 0.8/0.1/0.1 ratios, but when I run the project it shows this message:

                                                                                  Filling up shuffle buffer (this may take a while):
                                                                                  

                                                                                  What can I do to solve this problem?

                                                                                  You can see the DataGenerator class below.

                                                                                  import cv2
                                                                                  import numpy as np
                                                                                  from tensorflow import keras
                                                                                  
                                                                                  class DataGenerator(keras.utils.Sequence):
                                                                                      'Generates data for Keras'
                                                                                      def __init__(self, df, labels, batch_size=32, img_size=(224, 224),
                                                                                                   n_classes=2, shuffle=True):
                                                                                          'Initialization'
                                                                                          self.batch_size = batch_size
                                                                                          self.labels = labels
                                                                                          self.df = df
                                                                                          self.img_size = img_size
                                                                                          self.n_classes = n_classes
                                                                                          self.shuffle = shuffle
                                                                                          self.batch_labels = []
                                                                                          self.batch_names = []
                                                                                          self.on_epoch_end()
                                                                                  
                                                                                      def __len__(self):
                                                                                          'Denotes the number of batches per epoch'
                                                                                          return int(np.floor(len(self.df) / self.batch_size))
                                                                                  
                                                                                      def __getitem__(self, index):
                                                                                          batch_index = self.indexes[index * self.batch_size : (index + 1) * self.batch_size]
                                                                                          frame_paths = self.df.iloc[batch_index]["framePath"].values
                                                                                          frame_label = self.df.iloc[batch_index]["label"].values
                                                                                  
                                                                                          imgs = [cv2.imread(frame) for frame in frame_paths]
                                                                                          imgs = [cv2.cvtColor(img, cv2.COLOR_BGR2RGB) for img in imgs]
                                                                                          imgs = [
                                                                                              cv2.resize(img, self.img_size) for img in imgs if img.shape != self.img_size
                                                                                          ]
                                                                                          batch_imgs = np.asarray(imgs)
                                                                                          labels = list(map(int, frame_label))
                                                                                          y = np.array(labels)
                                                                                          self.batch_labels.extend(labels)
                                                                                          self.batch_names.extend([str(frame).split("\\")[-1] for frame in frame_paths])
                                                                                  
                                                                                          return (batch_imgs, y)
                                                                                  
                                                                                      def on_epoch_end(self):
                                                                                          'Updates indexes after each epoch'
                                                                                          self.indexes = np.arange(len(self.df))
                                                                                          if self.shuffle:
                                                                                              np.random.shuffle(self.indexes)
                                                                                  

                                                                                  QUESTION

                                                                                  Designing Twitter Search - How to sort large datasets?
                                                                                  Asked 2022-Mar-24 at 17:25

                                                                                  I'm reading an article about how to design Twitter Search. The basic idea is to map tweets, based on their ids, to servers, where each server holds the mapping

                                                                                  English word -> A set of tweetIds having this word

                                                                                  Now if we want to find all the tweets that contain some word, all we need to do is query all the servers and aggregate the results. The article casually suggests that we can also sort the results by some parameter like "popularity", but isn't that a heavy task, especially if the word is a hot one?

                                                                                  What is done in practice in such search systems?

                                                                                  Maybe some tradeoffs are being used?

                                                                                  Thanks!

                                                                                  ANSWER

                                                                                  Answered 2022-Mar-24 at 17:25

                                                                                  First of all, there are two types of indexes: local and global.

                                                                                  A local index is stored on the same computer as the tweet data. For example, you may have 10 shards, and each of these shards will have its own index, e.g. the word "car" -> a sorted list of tweet ids.

                                                                                  When a search is run, we have to send the query to every server, since we don't know where the most popular tweets are. The query asks every server to return its top results. All of these results are collected on the same box - the one executing the user request - and that process picks the top 10 out of the entire population.

                                                                                  Since all results are already sorted in the index itself, picking the top 10 across all the lists is effectively constant-time - we just do a simple heap merge/watermarking over a fixed number of tweets.
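The scatter-gather merge just described can be sketched in Python. The shard data, names, and popularity values below are made up for illustration; a real system would stream results over RPC rather than hold them in lists:

```python
import heapq
from itertools import islice

# Hypothetical per-shard results for one keyword: each shard has already
# sorted its matches by popularity, descending, as (popularity, tweet_id).
shard_results = [
    [(98, "t1"), (75, "t4"), (60, "t9")],   # shard 0
    [(95, "t3"), (40, "t6"), (33, "t8")],   # shard 1
    [(91, "t2"), (88, "t5"), (12, "t7")],   # shard 2
]

def top_k(shards, k):
    """k-way merge of the sorted shard lists, keeping the k most popular.

    heapq.merge holds only one element per shard in its heap, so the
    aggregator never materialises the full result set.
    """
    # Negate popularity so an ascending merge yields descending popularity.
    streams = (((-pop, tid) for pop, tid in shard) for shard in shards)
    return [(-neg, tid) for neg, tid in islice(heapq.merge(*streams), k)]

print(top_k(shard_results, 4))
# -> [(98, 't1'), (95, 't3'), (91, 't2'), (88, 't5')]
```

Because each shard list is pre-sorted, the aggregator does work proportional to k and the number of shards, not to the total number of matching tweets.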

                                                                                  A second nice property: we can do pagination - the next query is also sent to every box with additional data - "give me your top 10 with popularity below X", where X is the popularity of the last tweet returned to the customer.

                                                                                  A global index is a different beast - it does not live on the same boxes as the data (it could, but does not have to). In that case, when we search for a keyword, we know exactly where to look. And since the index itself is sorted, it is fast to get the top 10 most popular results (or to paginate).

                                                                                  Since the global index returns only tweet ids and not the tweets themselves, we have to look up the tweet for every id - this is called the N+1 problem: 1 query to get the list of ids and then one query for every id. There are several ways to solve this - caching and data duplication are by far the most common approaches.

                                                                                  Source https://stackoverflow.com/questions/71588238

                                                                                  QUESTION

                                                                                  Unnest Query optimisation for singular record
                                                                                  Asked 2022-Mar-24 at 11:45

                                                                                  I'm trying to optimise my query for the case where an internal customer only wants one result (and its associated nested dataset). My aim is to reduce the amount of data the query processes.

                                                                                  However, the processed bytes appear to be exactly the same whether I query 1 record (with an unnested array of length 48,000) or the whole dataset (10,000 records, with 514,048,748 unnested array elements in total)!

                                                                                  So my table results for one record query:

                                                                                  SELECT test_id, value
                                                                                  FROM , unnest(TimeSeries)timeseries
                                                                                  WHERE test_id= "T0003" and SignalName = "Distance";
                                                                                  

                                                                                  looks like this:

                                                                                  test_id  value
                                                                                  T0003    1.0
                                                                                  T0003    2.0
                                                                                  T0003    3.0
                                                                                  T0003    4.0

                                                                                  (48000 rows)

                                                                                  This will continue until value (Distance) = 48000 m (48,000 rows) for the 1 record where test_id = "T0003".

                                                                                  Total process was 3.84GB

                                                                                  For whole table (~10,000 records):

                                                                                  SELECT test_id, value
                                                                                  FROM , unnest(TimeSeries)timeseries
                                                                                  WHERE SignalName = "Distance";
                                                                                  

                                                                                  looks like this:

                                                                                  test_id  value
                                                                                  T0001    1.0
                                                                                  T0001    2.0
                                                                                  T0001    3.0
                                                                                  T0001    4.0

                                                                                  (514,048,748 rows)

                                                                                  Total process was 3.84GB

                                                                                  Why is the processed size the same for both queries, and how can I optimise this for single-row extractions?

                                                                                  ANSWER

                                                                                  Answered 2022-Mar-24 at 11:45

                                                                                  This is happening because a full table scan is still needed to find all the rows whose test id equals the specified one.

                                                                                  It is not clear from your example which columns are part of the TimeSeries record. In case test_id is not one of them, I would suggest clustering the table on the test_id column. With clustering, the data is automatically organized according to the contents of the test_id column.

                                                                                  So, when you query with a filter on that column a full scan won't be needed to find all values.

                                                                                  Read more about clustered tables here.
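For reference, a minimal sketch of that suggestion in BigQuery DDL, assuming the source table is dataset.tests (a hypothetical name standing in for the table elided from the question):

```sql
-- Recreate the table clustered on test_id; a WHERE filter on that column
-- can then prune storage blocks instead of scanning the whole table.
CREATE TABLE dataset.tests_clustered
CLUSTER BY test_id
AS SELECT * FROM dataset.tests;
```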

                                                                                  Source https://stackoverflow.com/questions/71599650

                                                                                  QUESTION

                                                                                  handling million of rows for lookup operation using python
                                                                                  Asked 2022-Mar-19 at 11:27

                                                                                  I am new to data handling. I need to create a Python program to search for each record of samplefile1 in samplefile2. I am able to achieve it, but looping each of the 200 rows in samplefile1 over the 200 rows in samplefile2 took 180 seconds of execution time.
                                                                                  
                                                                                  I am looking for something more time-efficient so that I can do this task in the minimum time.
                                                                                  
                                                                                  My actual dataset sizes are: 9 million rows -> samplefile1 and 9 million rows -> samplefile2.

                                                                                  Here is my code using Pandas.

                                                                                  samplefile1 rows:

                                                                                  number='7777777777' subscriber-id="7777777777" rrid=0 NAPTR {order=10 preference=50 flags="U"service="sip+e2u"regexp="!^(.*)$!sip:+7777777777@ims.mnc001.mcc470.3gppnetwork.org;user=phone!"replacement=[]};
                                                                                  number='7777777778' subscriber-id="7777777778" rrid=0 NAPTR {order=10 preference=50 flags="U"service="sip+e2u"regexp="!^(.*)$!sip:+7777777778@ims.mnc001.mcc470.3gppnetwork.org;user=phone!"replacement=[]};
                                                                                  number='7777777779' subscriber-id="7777777779" rrid=0 NAPTR {order=10 preference=50 flags="U"service="sip+e2u"regexp="!^(.*)$!sip:+7777777779@ims.mnc001.mcc470.3gppnetwork.org;user=phone!"replacement=[]};
                                                                                  .........100 rows
                                                                                  

                                                                                  samplefile2 rows

                                                                                  number='7777777777' subscriber-id="7777777777" rrid=0 NAPTR {order=10 preference=50 flags="U"service="sip+e2u"regexp="!^(.*)$!sip:+7777777777@ims.mnc001.mcc470.3gppnetwork.org;user=phone!"replacement=[]};
                                                                                  number='7777777778' subscriber-id="7777777778" rrid=0 NAPTR {order=10 preference=50 flags="U"service="sip+e2u"regexp="!^(.*)$!sip:+7777777778@ims.mnc001.mcc470.3gppnetwork.org;user=phone!"replacement=[]};
                                                                                  number='7777777769' subscriber-id="7777777779" rrid=0 NAPTR {order=10 preference=50 flags="U"service="sip+e2u"regexp="!^(.*)$!sip:+7777777779@ims.mnc001.mcc470.3gppnetwork.org;user=phone!"replacement=[]};
                                                                                  ........100 rows
                                                                                  
                                                                                  import time
                                                                                  import pandas as pd
                                                                                  
                                                                                  def timeit(func):
                                                                                      """
                                                                                      Decorator for measuring function's running time.
                                                                                      """
                                                                                      def measure_time(*args, **kw):
                                                                                          start_time = time.time()
                                                                                          result = func(*args, **kw)
                                                                                          print("Processing time of %s(): %.2f seconds."
                                                                                                % (func.__qualname__, time.time() - start_time))
                                                                                          return result
                                                                                  
                                                                                      return measure_time
                                                                                  
                                                                                  @timeit
                                                                                  def func():
                                                                                      df = pd.read_csv("sample_2.txt", names=["A1"], skiprows=0, sep=';')
                                                                                      df.drop(df.filter(regex="Unname"),axis=1, inplace=True)
                                                                                      finaldatafile1=df.fillna("TrackRow")
                                                                                      
                                                                                      df1=pd.read_csv("sample_1.txt",names=["A1"],skiprows=0,sep=';')
                                                                                      df1.drop(df.filter(regex="Unname"),axis=1, inplace=True)
                                                                                      finaldatafile2=df1.fillna("TrackRow")
                                                                                      indexdf=df.index
                                                                                      indexdf1=df1.index
                                                                                      ##### for loop for string to be matched (small datasets#######
                                                                                      for i in range(0,len(indexdf)-1):
                                                                                          lookup_value=finaldatafile1.iloc[[i]].to_string()
                                                                                         # print(lookup_value)
                                                                                      ######### for loop for lookup dataset( large dataset #########
                                                                                          for j in range(0,len(indexdf1)-1):
                                                                                              match_value=finaldatafile2.iloc[[j]].to_string()
                                                                                              if i is j:
                                                                                                  print (f"Its a match on lookup table position {j} and for string {lookup_value}")
                                                                                              else:
                                                                                                  print("no match found in complete dataset")
                                                                                  if __name__ == "__main__":
                                                                                        func()
                                                                                  
                                                                                  
                                                                                  

                                                                                  ANSWER

                                                                                  Answered 2022-Mar-19 at 11:27

                                                                                  I don't think using Pandas is helping here, as you are just comparing whole lines. An alternative approach would be to load the first file as a set of lines, then enumerate over the lines of the second file, testing whether each is in the set. This will be much faster:

                                                                                  @timeit
                                                                                  def func():
                                                                                      with open('sample_1.txt') as f_sample1:
                                                                                          data1 = set(f_sample1.read().splitlines())
                                                                                      
                                                                                      with open('sample_2.txt') as f_sample2:
                                                                                          data2 = f_sample2.read().splitlines()
                                                                                          
                                                                                      for index, entry in enumerate(data2):
                                                                                          if entry in data1:
                                                                                              print(f"It's a match on lookup table position {index} and for string\n{entry}")
                                                                                          else:
                                                                                              print("no match found in complete dataset")
                                                                                  

                                                                                  Source https://stackoverflow.com/questions/71526523

                                                                                  QUESTION

                                                                                  split function does not return any observations with large dataset
                                                                                  Asked 2022-Mar-12 at 22:29

                                                                                  I have a dataframe like this:

                                                                                  seqnames       pos     strand    nucleotide     count
                                                                                      id1         12        +          A            13
                                                                                      id1         13        +          C            25
                                                                                      id2         24        +          G            10
                                                                                      id2         25        +          T            25
                                                                                      id2         26        +          A            10
                                                                                      id3         10        +          C            5
                                                                                  

                                                                                  But it has more than 100,000 rows in total, and seqnames has 3138 levels. I would like to split it into a list of dataframes according to seqnames, so I used the split function:

                                                                                  data_list <- split(data,data$seqnames)
                                                                                  

                                                                                  But it only returns something like this:

                                                                                  List of 3138
                                                                                   $ id1:'data.frame':    0 obs. of  6 variables:
                                                                                    ..$ seqnames  : Factor w/ 3138 levels "id1","id2",..: 
                                                                                    ..$ pos       : int(0) 
                                                                                    ..$ strand    : Factor w/ 3 levels "+","-","*": 
                                                                                    ..$ nucleotide: Factor w/ 8 levels "A","C","G","T",..: 
                                                                                    ..$ count     : int(0) 
                                                                                    ..$ sample_id : chr(0) 
                                                                                   $ id2:'data.frame':    0 obs. of  6 variables:
                                                                                    ..$ seqnames  : Factor w/ 3138 levels "id1","id2",..: 
                                                                                    ..$ pos       : int(0) 
                                                                                    ..$ strand    : Factor w/ 3 levels "+","-","*": 
                                                                                    ..$ nucleotide: Factor w/ 8 levels "A","C","G","T",..: 
                                                                                    ..$ count     : int(0) 
                                                                                    ..$ sample_id : chr(0) 
                                                                                  

                                                                                  I can't figure out why it is like this, because I have used it on a made-up dataframe with all numbers (of course, not as many rows as this one) and it works. How can I solve this problem?

                                                                                  ANSWER

                                                                                  Answered 2022-Mar-12 at 22:29

                                                                                  It is just that there are many unused levels, as the column 'seqnames' is a factor. split has a drop option (drop = TRUE; by default it is FALSE) to remove those list elements. Otherwise, they are returned as data.frames with 0 rows. If we want those elements replaced by NULL instead, find the elements where the number of rows (nrow) is 0 and assign them NULL.

                                                                                  data_list <- split(data,data$seqnames)
                                                                                  > str(data_list)
                                                                                  List of 5
                                                                                   $ id1:'data.frame':    2 obs. of  5 variables:
                                                                                    ..$ seqnames  : Factor w/ 5 levels "id1","id2","id3",..: 1 1
                                                                                    ..$ pos       : int [1:2] 12 13
                                                                                    ..$ strand    : chr [1:2] "+" "+"
                                                                                    ..$ nucleotide: chr [1:2] "A" "C"
                                                                                    ..$ count     : int [1:2] 13 25
                                                                                   $ id2:'data.frame':    3 obs. of  5 variables:
                                                                                    ..$ seqnames  : Factor w/ 5 levels "id1","id2","id3",..: 2 2 2
                                                                                    ..$ pos       : int [1:3] 24 25 26
                                                                                    ..$ strand    : chr [1:3] "+" "+" "+"
                                                                                    ..$ nucleotide: chr [1:3] "G" "T" "A"
                                                                                    ..$ count     : int [1:3] 10 25 10
                                                                                   $ id3:'data.frame':    1 obs. of  5 variables:
                                                                                    ..$ seqnames  : Factor w/ 5 levels "id1","id2","id3",..: 3
                                                                                    ..$ pos       : int 10
                                                                                    ..$ strand    : chr "+"
                                                                                    ..$ nucleotide: chr "C"
                                                                                    ..$ count     : int 5
                                                                                   $ id4:'data.frame':    0 obs. of  5 variables:
                                                                                    ..$ seqnames  : Factor w/ 5 levels "id1","id2","id3",..: 
                                                                                    ..$ pos       : int(0) 
                                                                                    ..$ strand    : chr(0) 
                                                                                    ..$ nucleotide: chr(0) 
                                                                                    ..$ count     : int(0) 
                                                                                   $ id5:'data.frame':    0 obs. of  5 variables:
                                                                                    ..$ seqnames  : Factor w/ 5 levels "id1","id2","id3",..: 
                                                                                    ..$ pos       : int(0) 
                                                                                    ..$ strand    : chr(0) 
                                                                                    ..$ nucleotide: chr(0) 
                                                                                    ..$ count     : int(0) 
                                                                                  

Do the assignment to NULL:

data_list[sapply(data_list, nrow) == 0] <- list(NULL)

Check again:

                                                                                  > str(data_list)
                                                                                  List of 5
                                                                                   $ id1:'data.frame':    2 obs. of  5 variables:
                                                                                    ..$ seqnames  : Factor w/ 5 levels "id1","id2","id3",..: 1 1
                                                                                    ..$ pos       : int [1:2] 12 13
                                                                                    ..$ strand    : chr [1:2] "+" "+"
                                                                                    ..$ nucleotide: chr [1:2] "A" "C"
                                                                                    ..$ count     : int [1:2] 13 25
                                                                                   $ id2:'data.frame':    3 obs. of  5 variables:
                                                                                    ..$ seqnames  : Factor w/ 5 levels "id1","id2","id3",..: 2 2 2
                                                                                    ..$ pos       : int [1:3] 24 25 26
                                                                                    ..$ strand    : chr [1:3] "+" "+" "+"
                                                                                    ..$ nucleotide: chr [1:3] "G" "T" "A"
                                                                                    ..$ count     : int [1:3] 10 25 10
                                                                                   $ id3:'data.frame':    1 obs. of  5 variables:
                                                                                    ..$ seqnames  : Factor w/ 5 levels "id1","id2","id3",..: 3
                                                                                    ..$ pos       : int 10
                                                                                    ..$ strand    : chr "+"
                                                                                    ..$ nucleotide: chr "C"
                                                                                    ..$ count     : int 5
                                                                                   $ id4: NULL
                                                                                   $ id5: NULL
                                                                                  
The data:
                                                                                  data <- structure(list(seqnames = structure(c(1L, 1L, 2L, 2L, 2L, 
                                                                                  3L), .Label = c("id1", 
                                                                                  "id2", "id3", "id4", "id5"), class = "factor"), pos = c(12L, 
                                                                                  13L, 24L, 25L, 26L, 10L), strand = c("+", "+", "+", "+", "+", 
                                                                                  "+"), nucleotide = c("A", "C", "G", "T", "A", "C"), count = c(13L, 
                                                                                  25L, 10L, 25L, 10L, 5L)), row.names = c(NA, -6L), class = "data.frame")
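The R idiom above replaces zero-row groups with NULL after splitting the data by id. For readers more at home in Python, here is a minimal sketch of the same pattern; the data values are hypothetical and only mirror the example above, this is not code from the repository:

```python
# Group rows by id, then replace empty groups with None
# (the analogue of R's NULL in the snippet above).
rows = [
    {"seqnames": "id1", "pos": 12, "nucleotide": "A", "count": 13},
    {"seqnames": "id1", "pos": 13, "nucleotide": "C", "count": 25},
    {"seqnames": "id2", "pos": 24, "nucleotide": "G", "count": 10},
]
ids = ["id1", "id2", "id3"]  # id3 has no rows, like id4/id5 in the R output

# Build one group per id (empty list when the id has no rows),
# analogous to R's split(data, data$seqnames).
groups = {i: [r for r in rows if r["seqnames"] == i] for i in ids}

# Replace empty groups with None, mirroring
# data_list[sapply(data_list, nrow) == 0] <- list(NULL)
groups = {i: (g if g else None) for i, g in groups.items()}
print(groups["id3"])  # prints: None
```

The key point in both languages is that the emptiness test (`nrow(...) == 0` in R, falsiness of an empty list in Python) is computed over the whole collection before the replacement is applied.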
                                                                                  

                                                                                  Source https://stackoverflow.com/questions/71453084

                                                                                  Community Discussions, Code Snippets contain sources that include Stack Exchange Network

                                                                                  Vulnerabilities

                                                                                  No vulnerabilities reported

                                                                                  Install SZT-bigdata

                                                                                  You can download it from GitHub.

                                                                                  Support

For new features, suggestions, and bug reports, create an issue on GitHub. If you have questions, check for existing answers and ask on Stack Overflow.
                                                                                  Find more information at:
                                                                                  Find, review, and download reusable Libraries, Code Snippets, Cloud APIs from over 650 million Knowledge Items
                                                                                  Find more libraries
Explore Kits - Develop, implement, customize Projects, Custom Functions and Applications with kandi kits
                                                                                  Save this library and start creating your kit
                                                                                  CLONE
                                                                                • HTTPS

                                                                                  https://github.com/geekyouth/SZT-bigdata.git

                                                                                • CLI

                                                                                  gh repo clone geekyouth/SZT-bigdata

                                                                                • sshUrl

                                                                                  git@github.com:geekyouth/SZT-bigdata.git



                                                                                  Try Top Libraries by geekyouth

litemall-kl

by geekyouth (Java)

geekyouth.github.io

by geekyouth (JavaScript)

uptime-status

by geekyouth (JavaScript)

static-website-demo

by geekyouth (CSS)

flink-pi

by geekyouth (Java)

                                                                                  Compare Scala Libraries with Highest Support

                                                                                  spark

                                                                                  by apache

                                                                                  prisma1

                                                                                  by prisma

                                                                                  playframework

                                                                                  by playframework

                                                                                  scala

                                                                                  by scala

                                                                                  gitbucket

                                                                                  by gitbucket
