
hadoop | Hadoop on Mesos

by mesos | Java Version: 0.1.0 | License: No License


kandi X-RAY | hadoop Summary

hadoop is a Java library typically used in Big Data, Docker, Spark, and Hadoop applications. hadoop has no bugs, it has a build file available, and it has high support. However, hadoop has 1 reported vulnerability. You can download it from GitHub.
To run Hadoop on Mesos you need to add the hadoop-mesos-0.1.0.jar library to your Hadoop distribution (any distribution that uses protobuf > 2.5.0) and set some new configuration properties. Read on for details. The pom.xml included is configured and tested against CDH5 and MRv1. Hadoop on Mesos does not currently support YARN (and MRv2). To use the metrics feature (which uses the CodaHale Metrics library), you need to install libsnappy. The snappy-java package also includes a bundled version of libsnappyjava.
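For orientation, the new configuration properties are the MRv1 scheduler settings described in the upstream hadoop-mesos README; a minimal sketch is shown below. Treat the property names as assumptions to verify against the hadoop-mesos version you deploy, and note that the Mesos master address and executor URI values are placeholders. They are normally added as <property> entries in mapred-site.xml:

mapred.jobtracker.taskScheduler=org.apache.hadoop.mapred.MesosScheduler
mapred.mesos.taskScheduler=org.apache.hadoop.mapred.JobQueueTaskScheduler
mapred.mesos.master=zk://zookeeper.example.com:2181/mesos
mapred.mesos.executor.uri=hdfs://namenode:9000/hadoop-on-mesos-distribution.tar.gz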

Support

  • hadoop has a highly active ecosystem.
  • It has 177 star(s) with 85 fork(s). There are 44 watchers for this library.
  • It had no major release in the last 12 months.
  • There are 14 open issues and 16 have been closed. On average, issues are closed in 208 days. There are 3 open pull requests and 0 closed pull requests.
  • It has a negative sentiment in the developer community.
  • The latest version of hadoop is 0.1.0.

Quality

  • hadoop has 0 bugs and 0 code smells.

Security

  • hadoop has 1 vulnerability issue reported (0 critical, 1 high, 0 medium, 0 low).
  • hadoop code analysis shows 0 unresolved vulnerabilities.
  • There are 0 security hotspots that need review.

License

  • hadoop does not have a standard license declared.
  • Check the repository for any license declaration and review the terms closely.
  • Without a license, all rights are reserved, and you cannot use the library in your applications.

Reuse

  • hadoop releases are available to install and integrate.
  • Build file is available. You can build the component from source.
  • Installation instructions are not available. Examples and code snippets are available.
Top functions reviewed by kandi - BETA

kandi has reviewed hadoop and discovered the below as its top functions. This is intended to give you an instant insight into hadoop implemented functionality, and help decide if they suit your requirements.

  • Compute the number of running tasks.
  • Builds the container info.
  • Starts the task scheduler.
  • Schedules the idle task to be idle.
  • Launches a TaskTracker.
  • Determine the slots for this resource offer.
  • Revokes the slots for the tasktracker.
  • Update the status of a TaskTracker.
  • Schedules the periodic cleanup timer.
  • Builds the container info from configuration.

Get all kandi verified functions for this library.

                      hadoop Key Features

                      Hadoop on Mesos

                      hadoop Examples and Code Snippets


                      Hadoop on Mesos

                      mvn package
                      

                      spark-shell exception org.apache.spark.SparkException: Exception thrown in awaitResult

export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
export HADOOP_HOME=/mnt/d/soft/hadoop-2.8.5
export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop
export SPARK_HOME=/path/to/spark            # set this to your Spark installation directory
export YARN_CONF_DIR=$HADOOP_HOME/etc/hadoop/
export SPARK_MASTER_HOST=127.0.0.1
export SPARK_LOCAL_IP=127.0.0.1
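Once these variables point at consistent Java, Hadoop, and Spark installations, a quick sanity check from PySpark can confirm the versions being picked up (a sketch; the printed values depend on your installation):

from pyspark.sql import SparkSession

# Start a local session and print the Spark and bundled Hadoop versions.
spark = SparkSession.builder.appName("env-sanity-check").getOrCreate()
print(spark.version)
print(spark.sparkContext._jvm.org.apache.hadoop.util.VersionInfo.getVersion())
spark.stop()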
                      

                      How to read a csv file from s3 bucket using pyspark

--packages org.apache.hadoop:hadoop-aws:3.2.0

spark._jsc.hadoopConfiguration().set("fs.s3a.access.key", "<access_key>")
spark._jsc.hadoopConfiguration().set("fs.s3a.secret.key", "<secret_key>")

spark.read.csv("s3a://bucket/file.csv")

# Check which Hadoop version PySpark was built against, then match the hadoop-aws version to it:
print(f'pyspark hadoop version: {spark.sparkContext._jvm.org.apache.hadoop.util.VersionInfo.getVersion()}')

# List the Hadoop jars shipped with your Spark/PySpark installation:
ls jars/hadoop*.jar

# spark-submit options such as --packages must come before the application script:
spark-submit --packages org.apache.hadoop:hadoop-aws:3.3.1 runner.py
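For context, the fragments above can be tied together into one self-contained PySpark script. This is a sketch in which the bucket name, object key, and credentials are placeholders, and it assumes the hadoop-aws version is matched to the Hadoop build bundled with PySpark:

from pyspark.sql import SparkSession

# Pull in the S3A connector when the session starts (adjust the version to your Hadoop build).
spark = (SparkSession.builder
         .appName("read-csv-from-s3")
         .config("spark.jars.packages", "org.apache.hadoop:hadoop-aws:3.2.0")
         .getOrCreate())

# Credentials are placeholders; prefer IAM roles or credential providers in practice.
hconf = spark.sparkContext._jsc.hadoopConfiguration()
hconf.set("fs.s3a.access.key", "<access_key>")
hconf.set("fs.s3a.secret.key", "<secret_key>")

df = spark.read.csv("s3a://my-bucket/path/file.csv", header=True, inferSchema=True)
df.show()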
                      

                      Hadoop to SQL through SSIS Package : Data incorrect format

                      (DT_WSTR, 4000)(DT_STR, 4000, 1252)[ColumnName]
                      

                      How to run spark 3.2.0 on google dataproc?

                      SPARK_HOME="/opt/conda/miniconda3/envs/your_sample_env/lib/python/site-packages/pyspark"
                      SPARK_CONF="/usr/lib/spark/conf"
                      
                      spark.yarn.jars=local:/usr/lib/spark/jars/*
                      spark.yarn.unmanagedAM.enabled=true
                      
                      
                      function main() {
                        install_pip
                        pip install pyspark==3.2.0
                        sed -i '4d;27d' /usr/lib/spark/conf/spark-defaults.conf
                        cat << EOF | tee -a /etc/profile.d/custom_env.sh /etc/*bashrc >/dev/null
                      export SPARK_HOME=/opt/conda/miniconda3/lib/python3.8/site-packages/pyspark/
                      export SPARK_CONF=/usr/lib/spark/conf
                      EOF
                        sed -i 's/\/usr\/lib\/spark/\/opt\/conda\/miniconda3\/lib\/python3.8\/site-packages\/pyspark\//g' /opt/conda/miniconda3/share/jupyter/kernels/python3/kernel.json
                      
                        if [[ -z "${PACKAGES}" ]]; then
                          echo "WARNING: requirements empty"
                          exit 0
                        fi
                        run_with_retry pip install --upgrade ${PACKAGES}
                      
                      }

AttributeError: Can't get attribute 'new_block' on <module 'pandas.core.internals.blocks'>

# Session 1 (pandas >= 1.3, e.g. 1.3.4): write the pickle
import pickle
import numpy as np
import pandas as pd

df = pd.DataFrame(np.random.rand(3, 6))
with open("dump_from_v1.3.4.pickle", "wb") as f:
    pickle.dump(df, f)

quit()

# Session 2 (older pandas, e.g. 1.2.x): reading the same pickle fails
import pickle

with open("dump_from_v1.3.4.pickle", "rb") as f:
    df = pickle.load(f)
                      
                      ---------------------------------------------------------------------------
                      AttributeError                            Traceback (most recent call last)
                      <ipython-input-2-ff5c218eca92> in <module>
                            1 with open("dump_from_v1.3.4.pickle", "rb") as f:
                      ----> 2     df = pickle.load(f)
                            3 
                      
                      AttributeError: Can't get attribute 'new_block' on <module 'pandas.core.internals.blocks' from '/opt/anaconda3/lib/python3.7/site-packages/pandas/core/internals/blocks.py'>
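A hedged way around this version coupling, sketched below, is either to upgrade pandas in the reading environment to at least the writer's version, or to exchange the data in a format that does not depend on pandas internals (the parquet route assumes pyarrow or fastparquet is installed):

# Option 1: align versions, e.g. in the reading environment:
#   pip install --upgrade "pandas>=1.3"

# Option 2: avoid pickling DataFrames across pandas versions; use a stable on-disk format.
import numpy as np
import pandas as pd

# Parquet requires string column names, so label the columns explicitly.
df = pd.DataFrame(np.random.rand(3, 6), columns=[f"c{i}" for i in range(6)])
df.to_parquet("exchange.parquet")          # needs pyarrow or fastparquet
df_roundtrip = pd.read_parquet("exchange.parquet")
print(df_roundtrip.shape)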
                      

                      Cannot find conda info. Please verify your conda installation on EMR

                      wget https://repo.anaconda.com/miniconda/Miniconda3-py37_4.9.2-Linux-x86_64.sh  -O /home/hadoop/miniconda.sh \
                          && /bin/bash ~/miniconda.sh -b -p $HOME/conda
                      
                      echo -e '\n export PATH=$HOME/conda/bin:$PATH' >> $HOME/.bashrc && source $HOME/.bashrc
                      
                      
                      conda config --set always_yes yes --set changeps1 no
                      conda config -f --add channels conda-forge
                      
                      
                      conda create -n zoo python=3.7 # "zoo" is conda environment name
                      conda init bash
                      source activate zoo
                      conda install python 3.7.0 -c conda-forge orca 
                      sudo /home/hadoop/conda/envs/zoo/bin/python3.7 -m pip install virtualenv
                      
"spark.pyspark.python": "/home/hadoop/conda/envs/zoo/bin/python3",
"spark.pyspark.virtualenv.enabled": "true",
"spark.pyspark.virtualenv.type": "native",
"spark.pyspark.virtualenv.bin.path": "/home/hadoop/conda/envs/zoo/bin/",
"zeppelin.pyspark.python": "/home/hadoop/conda/bin/python",
"zeppelin.python": "/home/hadoop/conda/bin/python"
                      

PySpark runs in YARN client mode but fails in cluster mode for "User did not initialize spark context!"

                      from pyspark.sql import SparkSession
                      
                      spark = SparkSession.builder \
                                          .appName('MySparkApp') \
                                          .getOrCreate()
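In cluster mode the driver runs inside YARN rather than in the launching shell, so the script itself must create the SparkSession as shown above. A minimal submit-ready sketch (the file name and example job are illustrative):

# my_app.py (illustrative name)
from pyspark.sql import SparkSession

def main():
    # Creating the session here is what initializes the Spark context in cluster mode.
    spark = SparkSession.builder.appName("MySparkApp").getOrCreate()
    spark.range(10).show()
    spark.stop()

if __name__ == "__main__":
    main()

# Submit with the driver running inside the YARN cluster:
#   spark-submit --master yarn --deploy-mode cluster my_app.py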
                      

                      Where to find spark log in dataproc when running job on cluster mode

                      resource.type="cloud_dataproc_cluster" resource.labels.cluster_name="my_cluster_name" 
                      resource.labels.cluster_uuid="aaaaa-123435-bbbbbb-ccccc"
                      severity=DEFAULT
                      jsonPayload.container_logname="stdout"
                      jsonPayload.message!=""
                      log_name="projects/my-project_id/logs/yarn-userlogs"
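The same filter can also be queried programmatically; a sketch assuming the google-cloud-logging client library is installed, with the project ID, cluster name, and cluster UUID as placeholders:

from google.cloud import logging  # pip install google-cloud-logging

client = logging.Client(project="my-project-id")
log_filter = (
    'resource.type="cloud_dataproc_cluster" '
    'resource.labels.cluster_name="my_cluster_name" '
    'resource.labels.cluster_uuid="aaaaa-123435-bbbbbb-ccccc" '
    'jsonPayload.container_logname="stdout" '
    'log_name="projects/my-project-id/logs/yarn-userlogs"'
)

# Print the stdout lines captured from the YARN containers.
for entry in client.list_entries(filter_=log_filter):
    print(entry.payload)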
                      

                      Apache Hive fails to initialize on Windows 10 and Cygwin

                      schematool -dbType derby -initSchema
                      Error: FUNCTION 'NUCLEUS_ASCII' already exists. (state=X0Y68,code=30000)
                      org.apache.hadoop.hive.metastore.HiveMetaException: Schema initialization FAILED! Metastore state would be inconsistent !!
                      Underlying cause: java.io.IOException : Schema script failed, errorcode 2
                      Use --verbose for detailed stacktrace.
                      *** schemaTool failed ***
                      
                      --CREATE FUNCTION "APP"."NUCLEUS_ASCII" (C CHAR(1)) RETURNS INTEGER LANGUAGE JAVA PARAMETER STYLE JAVA READS SQL DATA CALLED ON NULL INPUT EXTERNAL NAME 'org.datanucleus.store.rdbms.adapter.DerbySQLFunction.ascii';
                      
                      --CREATE FUNCTION "APP"."NUCLEUS_MATCHES" (TEXT VARCHAR(8000),PATTERN VARCHAR(8000)) RETURNS INTEGER LANGUAGE JAVA PARAMETER STYLE JAVA READS SQL DATA CALLED ON NULL INPUT EXTERNAL NAME 'org.datanucleus.store.rdbms.adapter.DerbySQLFunction.matches' ;
                      
                      Initialization script completed
                      schemaTool completed
                      

                      Why is repartition faster than partitionBy in Spark?

// Repartition on the column before writing: rows with the same value are shuffled into the
// same task, so the output contains far fewer files.
spark.range(1000).withColumn("partition", 'id % 100)
    .repartition('partition).write.csv("/tmp/test.csv")

// Write with partitionBy only: without a prior repartition, each task may write a file into
// every partition directory it holds data for, producing many small files.
spark.range(1000).withColumn("partition", 'id % 100)
    .write.partitionBy("partition").csv("/tmp/test2.csv")

# PySpark fragments from the discussion:
df = spark.read.format("xml") \
  .options(rowTag="DeviceData") \
  .load(file_path, schema=meter_data)

.withColumn("partition", hash(col("_DeviceName")).cast("Long") % num_partitions) \

.repartition("partition") \
.write.format("json") \

.write.format("json") \
.partitionBy("partition") \

output_path + "\partition=0\"
output_path + "\partition=1\"
output_path + "\partition=99\"

.coalesce(num_partitions) \
.write.format("json") \
.partitionBy("partition") \

.repartition("partition") \
.write.format("json") \
.partitionBy("partition") \
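To make the comparison concrete, a small PySpark sketch (the column names, path, and partition count are illustrative) of the pattern discussed above: repartitioning on the partition column before partitionBy so that each partition directory receives a single file:

from pyspark.sql import SparkSession
from pyspark.sql.functions import col, hash

spark = SparkSession.builder.appName("partition-write-demo").getOrCreate()

num_partitions = 100
df = (spark.range(100000)
      .withColumn("partition", (hash(col("id")) % num_partitions).cast("long")))

# repartition("partition") shuffles all rows with the same value into the same task,
# so the partitionBy write below emits one file per partition directory.
(df.repartition("partition")
   .write.mode("overwrite")
   .partitionBy("partition")
   .json("/tmp/partitioned_output"))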
                      


                      Community Discussions

                      Trending Discussions on hadoop
• spark-shell throws java.lang.reflect.InvocationTargetException on running
• spark-shell exception org.apache.spark.SparkException: Exception thrown in awaitResult
• determine written object paths with Pyspark 3.2.1 + hadoop 3.3.2
• How to read a csv file from s3 bucket using pyspark
• Hadoop to SQL through SSIS Package : Data incorrect format
• How to run spark 3.2.0 on google dataproc?
• AttributeError: Can't get attribute 'new_block' on <module 'pandas.core.internals.blocks'>
• Cannot find conda info. Please verify your conda installation on EMR
• PySpark runs in YARN client mode but fails in cluster mode for "User did not initialize spark context!"
• Where to find spark log in dataproc when running job on cluster mode

                      QUESTION

                      spark-shell throws java.lang.reflect.InvocationTargetException on running

                      Asked 2022-Apr-01 at 19:53

                      When I execute run-example SparkPi, for example, it works perfectly, but when I run spark-shell, it throws these exceptions:

                      WARNING: An illegal reflective access operation has occurred
                      WARNING: Illegal reflective access by org.apache.spark.unsafe.Platform (file:/C:/big_data/spark-3.2.0-bin-hadoop3.2-scala2.13/jars/spark-unsafe_2.13-3.2.0.jar) to constructor java.nio.DirectByteBuffer(long,int)
                      WARNING: Please consider reporting this to the maintainers of org.apache.spark.unsafe.Platform
                      WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
                      WARNING: All illegal access operations will be denied in a future release
                      Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
                      Setting default log level to "WARN".
                      To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
                      Welcome to
                            ____              __
                           / __/__  ___ _____/ /__
                          _\ \/ _ \/ _ `/ __/  '_/
                         /___/ .__/\_,_/_/ /_/\_\   version 3.2.0
                            /_/
                      
                      Using Scala version 2.13.5 (OpenJDK 64-Bit Server VM, Java 11.0.9.1)
                      Type in expressions to have them evaluated.
                      Type :help for more information.
                      21/12/11 19:28:36 ERROR SparkContext: Error initializing SparkContext.
                      java.lang.reflect.InvocationTargetException
                              at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
                              at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
                              at java.base/jdk.internal.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
                              at java.base/java.lang.reflect.Constructor.newInstance(Constructor.java:490)
                              at org.apache.spark.executor.Executor.addReplClassLoaderIfNeeded(Executor.scala:909)
                              at org.apache.spark.executor.Executor.<init>(Executor.scala:160)
                              at org.apache.spark.scheduler.local.LocalEndpoint.<init>(LocalSchedulerBackend.scala:64)
                              at org.apache.spark.scheduler.local.LocalSchedulerBackend.start(LocalSchedulerBackend.scala:132)
                              at org.apache.spark.scheduler.TaskSchedulerImpl.start(TaskSchedulerImpl.scala:220)
                              at org.apache.spark.SparkContext.<init>(SparkContext.scala:581)
                              at org.apache.spark.SparkContext$.getOrCreate(SparkContext.scala:2690)
                              at org.apache.spark.sql.SparkSession$Builder.$anonfun$getOrCreate$2(SparkSession.scala:949)
                              at scala.Option.getOrElse(Option.scala:201)
                              at org.apache.spark.sql.SparkSession$Builder.getOrCreate(SparkSession.scala:943)
                              at org.apache.spark.repl.Main$.createSparkSession(Main.scala:114)
                              at $line3.$read$$iw.<init>(<console>:5)
                              at $line3.$read.<init>(<console>:4)
                              at $line3.$read$.<clinit>(<console>)
                              at $line3.$eval$.$print$lzycompute(<synthetic>:6)
                              at $line3.$eval$.$print(<synthetic>:5)
                              at $line3.$eval.$print(<synthetic>)
                              at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
                              at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
                              at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
                              at java.base/java.lang.reflect.Method.invoke(Method.java:566)
                              at scala.tools.nsc.interpreter.IMain$ReadEvalPrint.call(IMain.scala:670)
                              at scala.tools.nsc.interpreter.IMain$Request.loadAndRun(IMain.scala:1006)
                              at scala.tools.nsc.interpreter.IMain.$anonfun$doInterpret$1(IMain.scala:506)
                              at scala.reflect.internal.util.ScalaClassLoader.asContext(ScalaClassLoader.scala:36)
                              at scala.reflect.internal.util.ScalaClassLoader.asContext$(ScalaClassLoader.scala:116)
                              at scala.reflect.internal.util.AbstractFileClassLoader.asContext(AbstractFileClassLoader.scala:43)
                              at scala.tools.nsc.interpreter.IMain.loadAndRunReq$1(IMain.scala:505)
                              at scala.tools.nsc.interpreter.IMain.$anonfun$doInterpret$3(IMain.scala:519)
                              at scala.tools.nsc.interpreter.IMain.doInterpret(IMain.scala:519)
                              at scala.tools.nsc.interpreter.IMain.interpret(IMain.scala:503)
                              at scala.tools.nsc.interpreter.IMain.interpret(IMain.scala:501)
                              at scala.tools.nsc.interpreter.IMain.$anonfun$quietRun$1(IMain.scala:216)
                              at scala.tools.nsc.interpreter.shell.ReplReporterImpl.withoutPrintingResults(Reporter.scala:64)
                              at scala.tools.nsc.interpreter.IMain.quietRun(IMain.scala:216)
                              at scala.tools.nsc.interpreter.shell.ILoop.$anonfun$interpretPreamble$1(ILoop.scala:924)
                              at scala.collection.immutable.List.foreach(List.scala:333)
                              at scala.tools.nsc.interpreter.shell.ILoop.interpretPreamble(ILoop.scala:924)
                              at scala.tools.nsc.interpreter.shell.ILoop.$anonfun$run$3(ILoop.scala:963)
                              at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.scala:18)
                              at scala.tools.nsc.interpreter.shell.ILoop.echoOff(ILoop.scala:90)
                              at scala.tools.nsc.interpreter.shell.ILoop.$anonfun$run$2(ILoop.scala:963)
                              at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.scala:18)
                              at scala.tools.nsc.interpreter.IMain.withSuppressedSettings(IMain.scala:1406)
                              at scala.tools.nsc.interpreter.shell.ILoop.$anonfun$run$1(ILoop.scala:954)
                              at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.scala:18)
                              at scala.tools.nsc.interpreter.shell.ReplReporterImpl.withoutPrintingResults(Reporter.scala:64)
                              at scala.tools.nsc.interpreter.shell.ILoop.run(ILoop.scala:954)
                              at org.apache.spark.repl.Main$.doMain(Main.scala:84)
                              at org.apache.spark.repl.Main$.main(Main.scala:59)
                              at org.apache.spark.repl.Main.main(Main.scala)
                              at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
                              at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
                              at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
                              at java.base/java.lang.reflect.Method.invoke(Method.java:566)
                              at org.apache.spark.deploy.JavaMainApplication.start(SparkApplication.scala:52)
                              at org.apache.spark.deploy.SparkSubmit.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:955)
                              at org.apache.spark.deploy.SparkSubmit.doRunMain$1(SparkSubmit.scala:180)
                              at org.apache.spark.deploy.SparkSubmit.submit(SparkSubmit.scala:203)
                              at org.apache.spark.deploy.SparkSubmit.doSubmit(SparkSubmit.scala:90)
                              at org.apache.spark.deploy.SparkSubmit$$anon$2.doSubmit(SparkSubmit.scala:1043)
                              at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:1052)
                              at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
                      Caused by: java.net.URISyntaxException: Illegal character in path at index 42: spark://DESKTOP-JO73CF4.mshome.net:2103/C:\classes
                              at java.base/java.net.URI$Parser.fail(URI.java:2913)
                              at java.base/java.net.URI$Parser.checkChars(URI.java:3084)
                              at java.base/java.net.URI$Parser.parseHierarchical(URI.java:3166)
                              at java.base/java.net.URI$Parser.parse(URI.java:3114)
                              at java.base/java.net.URI.<init>(URI.java:600)
                              at org.apache.spark.repl.ExecutorClassLoader.<init>(ExecutorClassLoader.scala:57)
                              ... 67 more
                      21/12/11 19:28:36 ERROR Utils: Uncaught exception in thread main
                      java.lang.NullPointerException
                              at org.apache.spark.scheduler.local.LocalSchedulerBackend.org$apache$spark$scheduler$local$LocalSchedulerBackend$$stop(LocalSchedulerBackend.scala:173)
                              at org.apache.spark.scheduler.local.LocalSchedulerBackend.stop(LocalSchedulerBackend.scala:144)
                              at org.apache.spark.scheduler.TaskSchedulerImpl.stop(TaskSchedulerImpl.scala:927)
                              at org.apache.spark.scheduler.DAGScheduler.stop(DAGScheduler.scala:2516)
                              at org.apache.spark.SparkContext.$anonfun$stop$12(SparkContext.scala:2086)
                              at org.apache.spark.util.Utils$.tryLogNonFatalError(Utils.scala:1442)
                              at org.apache.spark.SparkContext.stop(SparkContext.scala:2086)
                              at org.apache.spark.SparkContext.<init>(SparkContext.scala:677)
                              at org.apache.spark.SparkContext$.getOrCreate(SparkContext.scala:2690)
                              at org.apache.spark.sql.SparkSession$Builder.$anonfun$getOrCreate$2(SparkSession.scala:949)
                              at scala.Option.getOrElse(Option.scala:201)
                              at org.apache.spark.sql.SparkSession$Builder.getOrCreate(SparkSession.scala:943)
                              at org.apache.spark.repl.Main$.createSparkSession(Main.scala:114)
                              at $line3.$read$$iw.<init>(<console>:5)
                              at $line3.$read.<init>(<console>:4)
                              at $line3.$read$.<clinit>(<console>)
                              at $line3.$eval$.$print$lzycompute(<synthetic>:6)
                              at $line3.$eval$.$print(<synthetic>:5)
                              at $line3.$eval.$print(<synthetic>)
                              at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
                              at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
                              at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
                              at java.base/java.lang.reflect.Method.invoke(Method.java:566)
                              at scala.tools.nsc.interpreter.IMain$ReadEvalPrint.call(IMain.scala:670)
                              at scala.tools.nsc.interpreter.IMain$Request.loadAndRun(IMain.scala:1006)
                              at scala.tools.nsc.interpreter.IMain.$anonfun$doInterpret$1(IMain.scala:506)
                              at scala.reflect.internal.util.ScalaClassLoader.asContext(ScalaClassLoader.scala:36)
                              at scala.reflect.internal.util.ScalaClassLoader.asContext$(ScalaClassLoader.scala:116)
                              at scala.reflect.internal.util.AbstractFileClassLoader.asContext(AbstractFileClassLoader.scala:43)
                              at scala.tools.nsc.interpreter.IMain.loadAndRunReq$1(IMain.scala:505)
                              at scala.tools.nsc.interpreter.IMain.$anonfun$doInterpret$3(IMain.scala:519)
                              at scala.tools.nsc.interpreter.IMain.doInterpret(IMain.scala:519)
                              at scala.tools.nsc.interpreter.IMain.interpret(IMain.scala:503)
                              at scala.tools.nsc.interpreter.IMain.interpret(IMain.scala:501)
                              at scala.tools.nsc.interpreter.IMain.$anonfun$quietRun$1(IMain.scala:216)
                              at scala.tools.nsc.interpreter.shell.ReplReporterImpl.withoutPrintingResults(Reporter.scala:64)
                              at scala.tools.nsc.interpreter.IMain.quietRun(IMain.scala:216)
                              at scala.tools.nsc.interpreter.shell.ILoop.$anonfun$interpretPreamble$1(ILoop.scala:924)
                              at scala.collection.immutable.List.foreach(List.scala:333)
                              at scala.tools.nsc.interpreter.shell.ILoop.interpretPreamble(ILoop.scala:924)
                              at scala.tools.nsc.interpreter.shell.ILoop.$anonfun$run$3(ILoop.scala:963)
                              at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.scala:18)
                              at scala.tools.nsc.interpreter.shell.ILoop.echoOff(ILoop.scala:90)
                              at scala.tools.nsc.interpreter.shell.ILoop.$anonfun$run$2(ILoop.scala:963)
                              at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.scala:18)
                              at scala.tools.nsc.interpreter.IMain.withSuppressedSettings(IMain.scala:1406)
                              at scala.tools.nsc.interpreter.shell.ILoop.$anonfun$run$1(ILoop.scala:954)
                              at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.scala:18)
                              at scala.tools.nsc.interpreter.shell.ReplReporterImpl.withoutPrintingResults(Reporter.scala:64)
                              at scala.tools.nsc.interpreter.shell.ILoop.run(ILoop.scala:954)
                              at org.apache.spark.repl.Main$.doMain(Main.scala:84)
                              at org.apache.spark.repl.Main$.main(Main.scala:59)
                              at org.apache.spark.repl.Main.main(Main.scala)
                              at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
                              at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
                              at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
                              at java.base/java.lang.reflect.Method.invoke(Method.java:566)
                              at org.apache.spark.deploy.JavaMainApplication.start(SparkApplication.scala:52)
                              at org.apache.spark.deploy.SparkSubmit.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:955)
                              at org.apache.spark.deploy.SparkSubmit.doRunMain$1(SparkSubmit.scala:180)
                              at org.apache.spark.deploy.SparkSubmit.submit(SparkSubmit.scala:203)
                              at org.apache.spark.deploy.SparkSubmit.doSubmit(SparkSubmit.scala:90)
                              at org.apache.spark.deploy.SparkSubmit$$anon$2.doSubmit(SparkSubmit.scala:1043)
                              at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:1052)
                              at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
                      21/12/11 19:28:36 WARN MetricsSystem: Stopping a MetricsSystem that is not running
                      21/12/11 19:28:36 ERROR Main: Failed to initialize Spark session.
                      java.lang.reflect.InvocationTargetException
                              at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
                              at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
                              at java.base/jdk.internal.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
                              at java.base/java.lang.reflect.Constructor.newInstance(Constructor.java:490)
                              at org.apache.spark.executor.Executor.addReplClassLoaderIfNeeded(Executor.scala:909)
                              at org.apache.spark.executor.Executor.<init>(Executor.scala:160)
                              at org.apache.spark.scheduler.local.LocalEndpoint.<init>(LocalSchedulerBackend.scala:64)
                              at org.apache.spark.scheduler.local.LocalSchedulerBackend.start(LocalSchedulerBackend.scala:132)
                              at org.apache.spark.scheduler.TaskSchedulerImpl.start(TaskSchedulerImpl.scala:220)
                              at org.apache.spark.SparkContext.<init>(SparkContext.scala:581)
                              at org.apache.spark.SparkContext$.getOrCreate(SparkContext.scala:2690)
                              at org.apache.spark.sql.SparkSession$Builder.$anonfun$getOrCreate$2(SparkSession.scala:949)
                              at scala.Option.getOrElse(Option.scala:201)
                              at org.apache.spark.sql.SparkSession$Builder.getOrCreate(SparkSession.scala:943)
                              at org.apache.spark.repl.Main$.createSparkSession(Main.scala:114)
                              at $line3.$read$$iw.<init>(<console>:5)
                              at $line3.$read.<init>(<console>:4)
                              at $line3.$read$.<clinit>(<console>)
                              at $line3.$eval$.$print$lzycompute(<synthetic>:6)
                              at $line3.$eval$.$print(<synthetic>:5)
                              at $line3.$eval.$print(<synthetic>)
                              at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
                              at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
                              at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
                              at java.base/java.lang.reflect.Method.invoke(Method.java:566)
                              at scala.tools.nsc.interpreter.IMain$ReadEvalPrint.call(IMain.scala:670)
                              at scala.tools.nsc.interpreter.IMain$Request.loadAndRun(IMain.scala:1006)
                              at scala.tools.nsc.interpreter.IMain.$anonfun$doInterpret$1(IMain.scala:506)
                              at scala.reflect.internal.util.ScalaClassLoader.asContext(ScalaClassLoader.scala:36)
                              at scala.reflect.internal.util.ScalaClassLoader.asContext$(ScalaClassLoader.scala:116)
                              at scala.reflect.internal.util.AbstractFileClassLoader.asContext(AbstractFileClassLoader.scala:43)
                              at scala.tools.nsc.interpreter.IMain.loadAndRunReq$1(IMain.scala:505)
                              at scala.tools.nsc.interpreter.IMain.$anonfun$doInterpret$3(IMain.scala:519)
                              at scala.tools.nsc.interpreter.IMain.doInterpret(IMain.scala:519)
                              at scala.tools.nsc.interpreter.IMain.interpret(IMain.scala:503)
                              at scala.tools.nsc.interpreter.IMain.interpret(IMain.scala:501)
                              at scala.tools.nsc.interpreter.IMain.$anonfun$quietRun$1(IMain.scala:216)
                              at scala.tools.nsc.interpreter.shell.ReplReporterImpl.withoutPrintingResults(Reporter.scala:64)
                              at scala.tools.nsc.interpreter.IMain.quietRun(IMain.scala:216)
                              at scala.tools.nsc.interpreter.shell.ILoop.$anonfun$interpretPreamble$1(ILoop.scala:924)
                              at scala.collection.immutable.List.foreach(List.scala:333)
                              at scala.tools.nsc.interpreter.shell.ILoop.interpretPreamble(ILoop.scala:924)
                              at scala.tools.nsc.interpreter.shell.ILoop.$anonfun$run$3(ILoop.scala:963)
                              at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.scala:18)
                              at scala.tools.nsc.interpreter.shell.ILoop.echoOff(ILoop.scala:90)
                              at scala.tools.nsc.interpreter.shell.ILoop.$anonfun$run$2(ILoop.scala:963)
                              at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.scala:18)
                              at scala.tools.nsc.interpreter.IMain.withSuppressedSettings(IMain.scala:1406)
                              at scala.tools.nsc.interpreter.shell.ILoop.$anonfun$run$1(ILoop.scala:954)
                              at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.scala:18)
                              at scala.tools.nsc.interpreter.shell.ReplReporterImpl.withoutPrintingResults(Reporter.scala:64)
                              at scala.tools.nsc.interpreter.shell.ILoop.run(ILoop.scala:954)
                              at org.apache.spark.repl.Main$.doMain(Main.scala:84)
                              at org.apache.spark.repl.Main$.main(Main.scala:59)
                              at org.apache.spark.repl.Main.main(Main.scala)
                              at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
                              at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
                              at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
                              at java.base/java.lang.reflect.Method.invoke(Method.java:566)
                              at org.apache.spark.deploy.JavaMainApplication.start(SparkApplication.scala:52)
                              at org.apache.spark.deploy.SparkSubmit.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:955)
                              at org.apache.spark.deploy.SparkSubmit.doRunMain$1(SparkSubmit.scala:180)
                              at org.apache.spark.deploy.SparkSubmit.submit(SparkSubmit.scala:203)
                              at org.apache.spark.deploy.SparkSubmit.doSubmit(SparkSubmit.scala:90)
                              at org.apache.spark.deploy.SparkSubmit$$anon$2.doSubmit(SparkSubmit.scala:1043)
                              at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:1052)
                              at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
                      Caused by: java.net.URISyntaxException: Illegal character in path at index 42: spark://DESKTOP-JO73CF4.mshome.net:2103/C:\classes
                              at java.base/java.net.URI$Parser.fail(URI.java:2913)
                              at java.base/java.net.URI$Parser.checkChars(URI.java:3084)
                              at java.base/java.net.URI$Parser.parseHierarchical(URI.java:3166)
                              at java.base/java.net.URI$Parser.parse(URI.java:3114)
                              at java.base/java.net.URI.<init>(URI.java:600)
                              at org.apache.spark.repl.ExecutorClassLoader.<init>(ExecutorClassLoader.scala:57)
                              ... 67 more
                      21/12/11 19:28:36 ERROR Utils: Uncaught exception in thread shutdown-hook-0
                      java.lang.ExceptionInInitializerError
                              at org.apache.spark.executor.Executor.stop(Executor.scala:333)
                              at org.apache.spark.executor.Executor.$anonfun$stopHookReference$1(Executor.scala:76)
                              at org.apache.spark.util.SparkShutdownHook.run(ShutdownHookManager.scala:214)
                              at org.apache.spark.util.SparkShutdownHookManager.$anonfun$runAll$2(ShutdownHookManager.scala:188)
                              at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.scala:18)
                              at org.apache.spark.util.Utils$.logUncaughtExceptions(Utils.scala:2019)
                              at org.apache.spark.util.SparkShutdownHookManager.$anonfun$runAll$1(ShutdownHookManager.scala:188)
                              at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.scala:18)
                              at scala.util.Try$.apply(Try.scala:210)
                              at org.apache.spark.util.SparkShutdownHookManager.runAll(ShutdownHookManager.scala:188)
                              at org.apache.spark.util.SparkShutdownHookManager$$anon$2.run(ShutdownHookManager.scala:178)
                              at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
                              at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
                              at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
                              at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
                              at java.base/java.lang.Thread.run(Thread.java:829)
                      Caused by: java.lang.NullPointerException
                              at org.apache.spark.shuffle.ShuffleBlockPusher$.<clinit>(ShuffleBlockPusher.scala:465)
                              ... 16 more
                      21/12/11 19:28:36 WARN ShutdownHookManager: ShutdownHook '' failed, java.util.concurrent.ExecutionException: java.lang.ExceptionInInitializerError
                      java.util.concurrent.ExecutionException: java.lang.ExceptionInInitializerError
                              at java.base/java.util.concurrent.FutureTask.report(FutureTask.java:122)
                              at java.base/java.util.concurrent.FutureTask.get(FutureTask.java:205)
                              at org.apache.hadoop.util.ShutdownHookManager.executeShutdown(ShutdownHookManager.java:124)
                              at org.apache.hadoop.util.ShutdownHookManager$1.run(ShutdownHookManager.java:95)
                      Caused by: java.lang.ExceptionInInitializerError
                              at org.apache.spark.executor.Executor.stop(Executor.scala:333)
                              at org.apache.spark.executor.Executor.$anonfun$stopHookReference$1(Executor.scala:76)
                              at org.apache.spark.util.SparkShutdownHook.run(ShutdownHookManager.scala:214)
                              at org.apache.spark.util.SparkShutdownHookManager.$anonfun$runAll$2(ShutdownHookManager.scala:188)
                              at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.scala:18)
                              at org.apache.spark.util.Utils$.logUncaughtExceptions(Utils.scala:2019)
                              at org.apache.spark.util.SparkShutdownHookManager.$anonfun$runAll$1(ShutdownHookManager.scala:188)
                              at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.scala:18)
                              at scala.util.Try$.apply(Try.scala:210)
                              at org.apache.spark.util.SparkShutdownHookManager.runAll(ShutdownHookManager.scala:188)
                              at org.apache.spark.util.SparkShutdownHookManager$$anon$2.run(ShutdownHookManager.scala:178)
                              at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
                              at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
                              at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
                              at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
                              at java.base/java.lang.Thread.run(Thread.java:829)
                      Caused by: java.lang.NullPointerException
                              at org.apache.spark.shuffle.ShuffleBlockPusher$.<clinit>(ShuffleBlockPusher.scala:465)
                              ... 16 more
                      

As far as I can see, it is caused by Illegal character in path at index 42: spark://DESKTOP-JO73CF4.mshome.net:2103/C:\classes, but I don't understand what exactly it means or how to deal with it.

                      How can I solve this problem?

                      I use Spark 3.2.0 Pre-built for Apache Hadoop 3.3 and later (Scala 2.13)

The JAVA_HOME, HADOOP_HOME, and SPARK_HOME path variables are set.

                      ANSWER

                      Answered 2022-Jan-07 at 15:11

I faced the same problem; I think Spark 3.2 itself is the issue.

I switched to Spark 3.1.2 and it works fine.

                      Source https://stackoverflow.com/questions/70317481

Community Discussions and Code Snippets contain sources that include the Stack Exchange Network.

                      Vulnerabilities

                      HDFS clients interact with a servlet on the DataNode to browse the HDFS namespace. The NameNode is provided as a query parameter that is not validated in Apache Hadoop before 2.7.0.
                      The HDFS web UI in Apache Hadoop before 2.7.0 is vulnerable to a cross-site scripting (XSS) attack through an unescaped query parameter.

                      Install hadoop

                      You can download it from GitHub.
You can use hadoop like any standard Java library. Please include the jar files in your classpath. You can also use any IDE to run and debug the hadoop component as you would with any other Java program. Best practice is to use a build tool that supports dependency management, such as Maven or Gradle. For Maven installation, please refer to maven.apache.org. For Gradle installation, please refer to gradle.org.

                      Support

For any new features, suggestions, or bugs, create an issue on GitHub. If you have any questions, check and ask them on the Stack Overflow community page.

© 2022 Open Weaver Inc.