Scala is a statically typed, general-purpose programming language that supports both object-oriented and functional programming. It runs on the JVM and interoperates with Java, and much of its design is aimed at addressing Java's verbosity and other shortcomings.
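A minimal, hypothetical snippet (not tied to any library listed on this page) showing how the two styles combine in ordinary Scala code:

// An immutable case class plus higher-order collection functions in one short program.
final case class Order(id: Int, amount: Double)

object Demo {
  def main(args: Array[String]): Unit = {
    val orders = List(Order(1, 20.0), Order(2, 55.5), Order(3, 12.25))
    // Functional style: filter and map over an immutable collection, then sum.
    val total = orders.filter(_.amount > 15.0).map(_.amount).sum
    println(s"Total of large orders: $total")  // Total of large orders: 75.5
  }
}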

Popular New Releases in Scala

prisma1 · Release 1.34.12 (2021-01-12)
scala · Scala 2.13.8
playframework · Play 2.8.13
akka · Akka 2.6.18
gitbucket

Popular Libraries in Scala

spark

by apache · Scala · 32507 stars · Apache-2.0

Apache Spark - A unified analytics engine for large-scale data processing

prisma1

by prisma · Scala · 16830 stars · Apache-2.0

💾 Database Tools incl. ORM, Migrations and Admin UI (Postgres, MySQL & MongoDB)

scala

by scala · Scala · 13628 stars · Apache-2.0

Scala 2 compiler and standard library. For bugs, see scala/bug

attic-predictionio

by apache · Scala · 12509 stars · Apache-2.0

PredictionIO, a machine learning server for developers and ML engineers.

playframework

by playframework · Scala · 12113 stars · Apache-2.0

Play Framework

akka

by akka · Scala · 12082 stars · NOASSERTION

Build highly concurrent, distributed, and resilient message-driven applications on the JVM

lila

by lichess-org · Scala · 11440 stars · NOASSERTION

♞ lichess.org: the forever free, adless and open source chess server ♞

lila

by ornicar · Scala · 10777 stars · NOASSERTION

♞ lichess.org: the forever free, adless and open source chess server ♞

CMAK

by yahoo · Scala · 10302 stars · Apache-2.0

CMAK is a tool for managing Apache Kafka clusters

Trending New libraries in Scala

XiangShan

by OpenXiangShan · Scala · 2649 stars · NOASSERTION

Open-source high-performance RISC-V processor

metarank

by metarank · Scala · 1395 stars · Apache-2.0

A low code Machine Learning tool that personalizes product listings, articles, recommendations, and search results in order to boost sales. A friendly Learn-to-Rank engine

SZT-bigdata

by geekyouth · Scala · 1137 stars · GPL-3.0

Shenzhen Metro big data passenger flow analysis system 🚇🚄🌟

NutShell

by OSCPU · Scala · 865 stars · NOASSERTION

RISC-V SoC designed by students in UCAS

MiNLP

by XiaoMi · Scala · 715 stars · Apache-2.0

XiaoMi Natural Language Processing Toolkits

feathr

by linkedin · Scala · 577 stars · Apache-2.0

Feathr – An Enterprise-Grade, High Performance Feature Store

zio-http

by dream11 · Scala · 459 stars · MIT

A scala library to write Http apps.

LakeSoul

by meta-soul · Scala · 417 stars · Apache-2.0

A Table Structure Storage to Unify Batch and Streaming Data Processing

trading

by gvolpe · Scala · 404 stars · Apache-2.0

💱 Trading application written in Scala 3 that showcases an Event-Driven Architecture (EDA) and Functional Programming (FP)

Top Authors in Scala

1. scala-steward · 729 Libraries · 6 stars
2. hmrc · 711 Libraries · 756 stars
3. MetadataGitTesting · 383 Libraries · 0 stars
4. nk-justai-test · 339 Libraries · 0 stars
5. guardian · 186 Libraries · 8812 stars
6. knoldus · 160 Libraries · 814 stars
7. xuwei-k · 114 Libraries · 730 stars
8. softprops · 110 Libraries · 919 stars
9. sbt · 97 Libraries · 14558 stars
10. p.baryshnikov · 84 Libraries · 0 stars


Trending Kits in Scala

Beautiful Soup is a Python library developed to ease web scraping and the parsing of HTML and XML. It was created by Leonard Richardson in 2004 in response to the need for an effective way of extracting data from web pages. At the time, web scraping was a tough task because there were no standardized ways to extract data. Richardson set out to create a library for parsing HTML and XML documents that makes it easier for developers to extract relevant information from web pages.

Features of Beautiful Soup:   

Beautiful Soup is known for its capabilities in parsing HTML and XML documents. It is important to note that Beautiful Soup does not itself provide data analysis; the focus is on extracting and navigating data from web pages. Beautiful Soup can, however, be used together with other Python libraries.

  1. Parsing and Navigating HTML/XML: Beautiful Soup excels at parsing and navigating HTML and XML. It provides a simple, intuitive API to traverse document structures and find elements based on tags, attributes, CSS selectors, or text (a rough Scala analogue of this workflow is sketched after this list).   
  2. Integration with Data Analysis Libraries: Beautiful Soup is commonly combined with other libraries, such as pandas, NumPy, and matplotlib, to perform data analysis tasks on the extracted data.   
  3. Handling Malformed HTML: Web pages often contain malformed HTML with inconsistencies or errors. Beautiful Soup is designed to handle such cases by employing lenient parsing. It can parse and extract data from imperfect HTML, making it a robust tool for web scraping tasks.   
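Since this is a Scala page, here is a rough Scala analogue of that parse-and-navigate workflow, sketched with the well-known jsoup HTML parser (assumes the org.jsoup:jsoup artifact on the classpath; the HTML string and selectors are made up for illustration):

import org.jsoup.Jsoup
import scala.jdk.CollectionConverters._

object ScrapeSketch {
  def main(args: Array[String]): Unit = {
    val html = """<html><body><h1>Title</h1><a href="/a">First</a><a href="/b">Second</a></body></html>"""
    val doc = Jsoup.parse(html)                                    // build a parse tree from the HTML string
    val heading = doc.select("h1").text()                          // CSS-selector based lookup
    val links = doc.select("a").asScala.map(_.attr("href")).toList // collect attribute values
    println(s"$heading -> $links")                                 // Title -> List(/a, /b)
  }
}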

 

Beautiful Soup plays a supporting role in data analysis thanks to its range of features and simple interface. Its ability to parse and navigate HTML and XML documents makes it an indispensable tool for web scraping and data extraction. As data becomes ever more important across the industry, Beautiful Soup's features and simplicity make it a practical choice for scraping and extracting valuable information from the vast expanse of the internet.

Fig: Preview of the output that you will get on running this code from your IDE.

Code

In this solution we are using the Beautiful Soup library for Python.

Instructions


Follow the steps carefully to get the output easily.


  1. Download and Install the PyCharm Community Edition on your computer.
  2. Open the terminal and install the required libraries with the following commands.
  3. Install Tkinter - pip install Tkinter.
  4. Create a new Python file on your IDE.
  5. Copy the snippet using the 'copy' button and paste it into your Python file.
  6. Remove lines 17 to 33 from the code.
  7. Run the current file to generate the output.


I hope you found this useful.


I found this code snippet by searching for ' Beautiful Soup - How to get text using Beautiful soup in python?' You can try any such use case!

Environment Tested


I tested this solution in the following versions. Be mindful of changes when working with other versions.

  1. PyCharm Community Edition 2023.3.1
  2. The solution is created in Python 3.8 Version
  3. Beautiful Soup v4.9.3 


Using this solution, we are able to extract text from HTML elements with Beautiful Soup in Python in a few simple steps. The process also provides an easy, hassle-free way to create a hands-on working version of the code for extracting text from HTML elements.

Dependent Library


You can search for any dependent library on kandi, like 'Scala-Scraper'.

Support


  1. For any support on kandi solution kits, please use the chat
  2. For further learning resources, visit the Open Weaver Community learning page


FAQ:   

1. What is an HTML parser library, and how does it work?   

An HTML parser library is a software tool used to parse and process HTML documents. It provides functions and classes that help with extracting and manipulating their content.

 

2. What is the difference between parsing an XML document and a web page using Beautiful Soup?   

There are some main differences between an XML document and a web page. Those differences stem from the varying characteristics and structures of XML and HTML.   

  • Parser Selection: Beautiful Soup offers different parsers for XML and HTML. For parsing XML, Beautiful Soup relies on the lxml parser, a third-party library known for its speed and compliance with XML standards. For HTML parsing, Beautiful Soup supports several parsers, both built-in and third-party.   
  • Document Structure: XML and HTML have different document structures. XML is a markup language designed to transport data, whereas HTML is used for structuring and presenting content on web pages. XML documents follow a hierarchical structure defined by user-defined tags; HTML documents have a predefined structure with tags specific to web pages.   
  • Element Selection: XML and HTML documents use different tags and attributes. When parsing an XML document, you locate elements by their tag names and attributes. HTML parsing offers more flexibility in element selection: Beautiful Soup supports CSS selectors, which select HTML elements based on classes, IDs, and attribute values.   

  

3. How do you create a parse tree with the Beautiful Soup 4 source tarball?   

To create a parse tree with the Beautiful Soup 4 source tarball, you first need to install Beautiful Soup and then use its parsing capabilities.

  • Install Beautiful Soup: Before you start, ensure you have Beautiful Soup installed. If you still need to install it, you can use pip.  
  • Download the Beautiful Soup 4 source tarball: Go to the Beautiful Soup website and download it.   
  • Extract the tarball: Extract the contents of the downloaded archive to your computer.  
  • Create a Python script to parse the tarball: Create a script in a text editor and import Beautiful Soup.   
  • Read the contents of the tarball: Read the contents using Python's built-in tarfile module.  
  • Create a Beautiful Soup object: Create a Beautiful Soup object by passing it the content to parse.   
  • Use the parse tree: Use the Beautiful Soup object's methods to navigate and manipulate the parse tree. For example, you can find specific tags, extract data, or change the contents.   


4. How can I use the Beautiful Soup search API to extract text from specific elements on a web page?  

With the Beautiful Soup search API, you can extract text from specific elements on a web page. The search API lets you find elements based on their tag names, attributes, and CSS classes. When using the search API, be aware that some elements may not be present on the web page or may not contain any text.

  

5. How do I use the Beautiful Soup to navigate through an HTML document for text extraction?   

To navigate through an HTML document and extract text, you first create a Beautiful Soup object and then use its methods to traverse the document's elements. Beautiful Soup takes the HTML content as input and creates a parse tree; you can then navigate it to find specific elements and extract their text. When extracting text, be aware that some elements may not be present in a given HTML document.

Trending Discussions on Scala

spark-shell throws java.lang.reflect.InvocationTargetException on running

Exception in thread "main" joptsimple.UnrecognizedOptionException: zookeeper is not a recognized option

How can I use :~: to determine type equality in Haskell?

NoSuchMethodError on com.fasterxml.jackson.dataformat.xml.XmlMapper.coercionConfigDefaults()

Exhaustive pattern matching when deprecated sealed trait instance present

How to check Databricks cluster for Log4J vulnerability?

How to create a general method for Scala 3 enums

Sbt-native-packager cannot connect to Docker daemon

Spark: unable to load native-hadoop library for platform

Use Scala with Azure Functions

QUESTION

spark-shell throws java.lang.reflect.InvocationTargetException on running

Asked 2022-Apr-01 at 19:53

When I execute run-example SparkPi, for example, it works perfectly, but when I run spark-shell, it throws these exceptions:

1WARNING: An illegal reflective access operation has occurred
2WARNING: Illegal reflective access by org.apache.spark.unsafe.Platform (file:/C:/big_data/spark-3.2.0-bin-hadoop3.2-scala2.13/jars/spark-unsafe_2.13-3.2.0.jar) to constructor java.nio.DirectByteBuffer(long,int)
3WARNING: Please consider reporting this to the maintainers of org.apache.spark.unsafe.Platform
4WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
5WARNING: All illegal access operations will be denied in a future release
6Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
7Setting default log level to "WARN".
8To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
9Welcome to
10      ____              __
11     / __/__  ___ _____/ /__
12    _\ \/ _ \/ _ `/ __/  '_/
13   /___/ .__/\_,_/_/ /_/\_\   version 3.2.0
14      /_/
15
16Using Scala version 2.13.5 (OpenJDK 64-Bit Server VM, Java 11.0.9.1)
17Type in expressions to have them evaluated.
18Type :help for more information.
1921/12/11 19:28:36 ERROR SparkContext: Error initializing SparkContext.
20java.lang.reflect.InvocationTargetException
21        at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
22        at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
23        at java.base/jdk.internal.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
24        at java.base/java.lang.reflect.Constructor.newInstance(Constructor.java:490)
25        at org.apache.spark.executor.Executor.addReplClassLoaderIfNeeded(Executor.scala:909)
26        at org.apache.spark.executor.Executor.<init>(Executor.scala:160)
27        at org.apache.spark.scheduler.local.LocalEndpoint.<init>(LocalSchedulerBackend.scala:64)
28        at org.apache.spark.scheduler.local.LocalSchedulerBackend.start(LocalSchedulerBackend.scala:132)
29        at org.apache.spark.scheduler.TaskSchedulerImpl.start(TaskSchedulerImpl.scala:220)
30        at org.apache.spark.SparkContext.<init>(SparkContext.scala:581)
31        at org.apache.spark.SparkContext$.getOrCreate(SparkContext.scala:2690)
32        at org.apache.spark.sql.SparkSession$Builder.$anonfun$getOrCreate$2(SparkSession.scala:949)
33        at scala.Option.getOrElse(Option.scala:201)
34        at org.apache.spark.sql.SparkSession$Builder.getOrCreate(SparkSession.scala:943)
35        at org.apache.spark.repl.Main$.createSparkSession(Main.scala:114)
36        at $line3.$read$$iw.<init>(<console>:5)
37        at $line3.$read.<init>(<console>:4)
38        at $line3.$read$.<clinit>(<console>)
39        at $line3.$eval$.$print$lzycompute(<synthetic>:6)
40        at $line3.$eval$.$print(<synthetic>:5)
41        at $line3.$eval.$print(<synthetic>)
42        at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
43        at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
44        at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
45        at java.base/java.lang.reflect.Method.invoke(Method.java:566)
46        at scala.tools.nsc.interpreter.IMain$ReadEvalPrint.call(IMain.scala:670)
47        at scala.tools.nsc.interpreter.IMain$Request.loadAndRun(IMain.scala:1006)
48        at scala.tools.nsc.interpreter.IMain.$anonfun$doInterpret$1(IMain.scala:506)
49        at scala.reflect.internal.util.ScalaClassLoader.asContext(ScalaClassLoader.scala:36)
50        at scala.reflect.internal.util.ScalaClassLoader.asContext$(ScalaClassLoader.scala:116)
51        at scala.reflect.internal.util.AbstractFileClassLoader.asContext(AbstractFileClassLoader.scala:43)
52        at scala.tools.nsc.interpreter.IMain.loadAndRunReq$1(IMain.scala:505)
53        at scala.tools.nsc.interpreter.IMain.$anonfun$doInterpret$3(IMain.scala:519)
54        at scala.tools.nsc.interpreter.IMain.doInterpret(IMain.scala:519)
55        at scala.tools.nsc.interpreter.IMain.interpret(IMain.scala:503)
56        at scala.tools.nsc.interpreter.IMain.interpret(IMain.scala:501)
57        at scala.tools.nsc.interpreter.IMain.$anonfun$quietRun$1(IMain.scala:216)
58        at scala.tools.nsc.interpreter.shell.ReplReporterImpl.withoutPrintingResults(Reporter.scala:64)
59        at scala.tools.nsc.interpreter.IMain.quietRun(IMain.scala:216)
60        at scala.tools.nsc.interpreter.shell.ILoop.$anonfun$interpretPreamble$1(ILoop.scala:924)
61        at scala.collection.immutable.List.foreach(List.scala:333)
62        at scala.tools.nsc.interpreter.shell.ILoop.interpretPreamble(ILoop.scala:924)
63        at scala.tools.nsc.interpreter.shell.ILoop.$anonfun$run$3(ILoop.scala:963)
64        at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.scala:18)
65        at scala.tools.nsc.interpreter.shell.ILoop.echoOff(ILoop.scala:90)
66        at scala.tools.nsc.interpreter.shell.ILoop.$anonfun$run$2(ILoop.scala:963)
67        at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.scala:18)
68        at scala.tools.nsc.interpreter.IMain.withSuppressedSettings(IMain.scala:1406)
69        at scala.tools.nsc.interpreter.shell.ILoop.$anonfun$run$1(ILoop.scala:954)
70        at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.scala:18)
71        at scala.tools.nsc.interpreter.shell.ReplReporterImpl.withoutPrintingResults(Reporter.scala:64)
72        at scala.tools.nsc.interpreter.shell.ILoop.run(ILoop.scala:954)
73        at org.apache.spark.repl.Main$.doMain(Main.scala:84)
74        at org.apache.spark.repl.Main$.main(Main.scala:59)
75        at org.apache.spark.repl.Main.main(Main.scala)
76        at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
77        at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
78        at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
79        at java.base/java.lang.reflect.Method.invoke(Method.java:566)
80        at org.apache.spark.deploy.JavaMainApplication.start(SparkApplication.scala:52)
81        at org.apache.spark.deploy.SparkSubmit.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:955)
82        at org.apache.spark.deploy.SparkSubmit.doRunMain$1(SparkSubmit.scala:180)
83        at org.apache.spark.deploy.SparkSubmit.submit(SparkSubmit.scala:203)
84        at org.apache.spark.deploy.SparkSubmit.doSubmit(SparkSubmit.scala:90)
85        at org.apache.spark.deploy.SparkSubmit$$anon$2.doSubmit(SparkSubmit.scala:1043)
86        at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:1052)
87        at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
88Caused by: java.net.URISyntaxException: Illegal character in path at index 42: spark://DESKTOP-JO73CF4.mshome.net:2103/C:\classes
89        at java.base/java.net.URI$Parser.fail(URI.java:2913)
90        at java.base/java.net.URI$Parser.checkChars(URI.java:3084)
91        at java.base/java.net.URI$Parser.parseHierarchical(URI.java:3166)
92        at java.base/java.net.URI$Parser.parse(URI.java:3114)
93        at java.base/java.net.URI.<init>(URI.java:600)
94        at org.apache.spark.repl.ExecutorClassLoader.<init>(ExecutorClassLoader.scala:57)
95        ... 67 more
9621/12/11 19:28:36 ERROR Utils: Uncaught exception in thread main
97java.lang.NullPointerException
98        at org.apache.spark.scheduler.local.LocalSchedulerBackend.org$apache$spark$scheduler$local$LocalSchedulerBackend$$stop(LocalSchedulerBackend.scala:173)
99        at org.apache.spark.scheduler.local.LocalSchedulerBackend.stop(LocalSchedulerBackend.scala:144)
100        at org.apache.spark.scheduler.TaskSchedulerImpl.stop(TaskSchedulerImpl.scala:927)
101        at org.apache.spark.scheduler.DAGScheduler.stop(DAGScheduler.scala:2516)
102        at org.apache.spark.SparkContext.$anonfun$stop$12(SparkContext.scala:2086)
103        at org.apache.spark.util.Utils$.tryLogNonFatalError(Utils.scala:1442)
104        at org.apache.spark.SparkContext.stop(SparkContext.scala:2086)
105        at org.apache.spark.SparkContext.<init>(SparkContext.scala:677)
106        at org.apache.spark.SparkContext$.getOrCreate(SparkContext.scala:2690)
107        at org.apache.spark.sql.SparkSession$Builder.$anonfun$getOrCreate$2(SparkSession.scala:949)
108        at scala.Option.getOrElse(Option.scala:201)
109        at org.apache.spark.sql.SparkSession$Builder.getOrCreate(SparkSession.scala:943)
110        at org.apache.spark.repl.Main$.createSparkSession(Main.scala:114)
111        at $line3.$read$$iw.<init>(<console>:5)
112        at $line3.$read.<init>(<console>:4)
113        at $line3.$read$.<clinit>(<console>)
114        at $line3.$eval$.$print$lzycompute(<synthetic>:6)
115        at $line3.$eval$.$print(<synthetic>:5)
116        at $line3.$eval.$print(<synthetic>)
117        at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
118        at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
119        at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
120        at java.base/java.lang.reflect.Method.invoke(Method.java:566)
121        at scala.tools.nsc.interpreter.IMain$ReadEvalPrint.call(IMain.scala:670)
122        at scala.tools.nsc.interpreter.IMain$Request.loadAndRun(IMain.scala:1006)
123        at scala.tools.nsc.interpreter.IMain.$anonfun$doInterpret$1(IMain.scala:506)
124        at scala.reflect.internal.util.ScalaClassLoader.asContext(ScalaClassLoader.scala:36)
125        at scala.reflect.internal.util.ScalaClassLoader.asContext$(ScalaClassLoader.scala:116)
126        at scala.reflect.internal.util.AbstractFileClassLoader.asContext(AbstractFileClassLoader.scala:43)
127        at scala.tools.nsc.interpreter.IMain.loadAndRunReq$1(IMain.scala:505)
128        at scala.tools.nsc.interpreter.IMain.$anonfun$doInterpret$3(IMain.scala:519)
129        at scala.tools.nsc.interpreter.IMain.doInterpret(IMain.scala:519)
130        at scala.tools.nsc.interpreter.IMain.interpret(IMain.scala:503)
131        at scala.tools.nsc.interpreter.IMain.interpret(IMain.scala:501)
132        at scala.tools.nsc.interpreter.IMain.$anonfun$quietRun$1(IMain.scala:216)
133        at scala.tools.nsc.interpreter.shell.ReplReporterImpl.withoutPrintingResults(Reporter.scala:64)
134        at scala.tools.nsc.interpreter.IMain.quietRun(IMain.scala:216)
135        at scala.tools.nsc.interpreter.shell.ILoop.$anonfun$interpretPreamble$1(ILoop.scala:924)
136        at scala.collection.immutable.List.foreach(List.scala:333)
137        at scala.tools.nsc.interpreter.shell.ILoop.interpretPreamble(ILoop.scala:924)
138        at scala.tools.nsc.interpreter.shell.ILoop.$anonfun$run$3(ILoop.scala:963)
139        at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.scala:18)
140        at scala.tools.nsc.interpreter.shell.ILoop.echoOff(ILoop.scala:90)
141        at scala.tools.nsc.interpreter.shell.ILoop.$anonfun$run$2(ILoop.scala:963)
142        at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.scala:18)
143        at scala.tools.nsc.interpreter.IMain.withSuppressedSettings(IMain.scala:1406)
144        at scala.tools.nsc.interpreter.shell.ILoop.$anonfun$run$1(ILoop.scala:954)
145        at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.scala:18)
146        at scala.tools.nsc.interpreter.shell.ReplReporterImpl.withoutPrintingResults(Reporter.scala:64)
147        at scala.tools.nsc.interpreter.shell.ILoop.run(ILoop.scala:954)
148        at org.apache.spark.repl.Main$.doMain(Main.scala:84)
149        at org.apache.spark.repl.Main$.main(Main.scala:59)
150        at org.apache.spark.repl.Main.main(Main.scala)
151        at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
152        at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
153        at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
154        at java.base/java.lang.reflect.Method.invoke(Method.java:566)
155        at org.apache.spark.deploy.JavaMainApplication.start(SparkApplication.scala:52)
156        at org.apache.spark.deploy.SparkSubmit.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:955)
157        at org.apache.spark.deploy.SparkSubmit.doRunMain$1(SparkSubmit.scala:180)
158        at org.apache.spark.deploy.SparkSubmit.submit(SparkSubmit.scala:203)
159        at org.apache.spark.deploy.SparkSubmit.doSubmit(SparkSubmit.scala:90)
160        at org.apache.spark.deploy.SparkSubmit$$anon$2.doSubmit(SparkSubmit.scala:1043)
161        at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:1052)
162        at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
16321/12/11 19:28:36 WARN MetricsSystem: Stopping a MetricsSystem that is not running
16421/12/11 19:28:36 ERROR Main: Failed to initialize Spark session.
165java.lang.reflect.InvocationTargetException
166        at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
167        at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
168        at java.base/jdk.internal.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
169        at java.base/java.lang.reflect.Constructor.newInstance(Constructor.java:490)
170        at org.apache.spark.executor.Executor.addReplClassLoaderIfNeeded(Executor.scala:909)
171        at org.apache.spark.executor.Executor.<init>(Executor.scala:160)
172        at org.apache.spark.scheduler.local.LocalEndpoint.<init>(LocalSchedulerBackend.scala:64)
173        at org.apache.spark.scheduler.local.LocalSchedulerBackend.start(LocalSchedulerBackend.scala:132)
174        at org.apache.spark.scheduler.TaskSchedulerImpl.start(TaskSchedulerImpl.scala:220)
175        at org.apache.spark.SparkContext.<init>(SparkContext.scala:581)
176        at org.apache.spark.SparkContext$.getOrCreate(SparkContext.scala:2690)
177        at org.apache.spark.sql.SparkSession$Builder.$anonfun$getOrCreate$2(SparkSession.scala:949)
178        at scala.Option.getOrElse(Option.scala:201)
179        at org.apache.spark.sql.SparkSession$Builder.getOrCreate(SparkSession.scala:943)
180        at org.apache.spark.repl.Main$.createSparkSession(Main.scala:114)
181        at $line3.$read$$iw.<init>(<console>:5)
182        at $line3.$read.<init>(<console>:4)
183        at $line3.$read$.<clinit>(<console>)
184        at $line3.$eval$.$print$lzycompute(<synthetic>:6)
185        at $line3.$eval$.$print(<synthetic>:5)
186        at $line3.$eval.$print(<synthetic>)
187        at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
188        at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
189        at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
190        at java.base/java.lang.reflect.Method.invoke(Method.java:566)
191        at scala.tools.nsc.interpreter.IMain$ReadEvalPrint.call(IMain.scala:670)
192        at scala.tools.nsc.interpreter.IMain$Request.loadAndRun(IMain.scala:1006)
193        at scala.tools.nsc.interpreter.IMain.$anonfun$doInterpret$1(IMain.scala:506)
194        at scala.reflect.internal.util.ScalaClassLoader.asContext(ScalaClassLoader.scala:36)
195        at scala.reflect.internal.util.ScalaClassLoader.asContext$(ScalaClassLoader.scala:116)
196        at scala.reflect.internal.util.AbstractFileClassLoader.asContext(AbstractFileClassLoader.scala:43)
197        at scala.tools.nsc.interpreter.IMain.loadAndRunReq$1(IMain.scala:505)
198        at scala.tools.nsc.interpreter.IMain.$anonfun$doInterpret$3(IMain.scala:519)
199        at scala.tools.nsc.interpreter.IMain.doInterpret(IMain.scala:519)
200        at scala.tools.nsc.interpreter.IMain.interpret(IMain.scala:503)
201        at scala.tools.nsc.interpreter.IMain.interpret(IMain.scala:501)
202        at scala.tools.nsc.interpreter.IMain.$anonfun$quietRun$1(IMain.scala:216)
203        at scala.tools.nsc.interpreter.shell.ReplReporterImpl.withoutPrintingResults(Reporter.scala:64)
204        at scala.tools.nsc.interpreter.IMain.quietRun(IMain.scala:216)
205        at scala.tools.nsc.interpreter.shell.ILoop.$anonfun$interpretPreamble$1(ILoop.scala:924)
206        at scala.collection.immutable.List.foreach(List.scala:333)
207        at scala.tools.nsc.interpreter.shell.ILoop.interpretPreamble(ILoop.scala:924)
208        at scala.tools.nsc.interpreter.shell.ILoop.$anonfun$run$3(ILoop.scala:963)
209        at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.scala:18)
210        at scala.tools.nsc.interpreter.shell.ILoop.echoOff(ILoop.scala:90)
211        at scala.tools.nsc.interpreter.shell.ILoop.$anonfun$run$2(ILoop.scala:963)
212        at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.scala:18)
213        at scala.tools.nsc.interpreter.IMain.withSuppressedSettings(IMain.scala:1406)
214        at scala.tools.nsc.interpreter.shell.ILoop.$anonfun$run$1(ILoop.scala:954)
215        at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.scala:18)
216        at scala.tools.nsc.interpreter.shell.ReplReporterImpl.withoutPrintingResults(Reporter.scala:64)
217        at scala.tools.nsc.interpreter.shell.ILoop.run(ILoop.scala:954)
218        at org.apache.spark.repl.Main$.doMain(Main.scala:84)
219        at org.apache.spark.repl.Main$.main(Main.scala:59)
220        at org.apache.spark.repl.Main.main(Main.scala)
221        at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
222        at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
223        at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
224        at java.base/java.lang.reflect.Method.invoke(Method.java:566)
225        at org.apache.spark.deploy.JavaMainApplication.start(SparkApplication.scala:52)
226        at org.apache.spark.deploy.SparkSubmit.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:955)
227        at org.apache.spark.deploy.SparkSubmit.doRunMain$1(SparkSubmit.scala:180)
228        at org.apache.spark.deploy.SparkSubmit.submit(SparkSubmit.scala:203)
229        at org.apache.spark.deploy.SparkSubmit.doSubmit(SparkSubmit.scala:90)
230        at org.apache.spark.deploy.SparkSubmit$$anon$2.doSubmit(SparkSubmit.scala:1043)
231        at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:1052)
232        at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
233Caused by: java.net.URISyntaxException: Illegal character in path at index 42: spark://DESKTOP-JO73CF4.mshome.net:2103/C:\classes
234        at java.base/java.net.URI$Parser.fail(URI.java:2913)
235        at java.base/java.net.URI$Parser.checkChars(URI.java:3084)
236        at java.base/java.net.URI$Parser.parseHierarchical(URI.java:3166)
237        at java.base/java.net.URI$Parser.parse(URI.java:3114)
238        at java.base/java.net.URI.<init>(URI.java:600)
239        at org.apache.spark.repl.ExecutorClassLoader.<init>(ExecutorClassLoader.scala:57)
240        ... 67 more
24121/12/11 19:28:36 ERROR Utils: Uncaught exception in thread shutdown-hook-0
242java.lang.ExceptionInInitializerError
243        at org.apache.spark.executor.Executor.stop(Executor.scala:333)
244        at org.apache.spark.executor.Executor.$anonfun$stopHookReference$1(Executor.scala:76)
245        at org.apache.spark.util.SparkShutdownHook.run(ShutdownHookManager.scala:214)
246        at org.apache.spark.util.SparkShutdownHookManager.$anonfun$runAll$2(ShutdownHookManager.scala:188)
247        at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.scala:18)
248        at org.apache.spark.util.Utils$.logUncaughtExceptions(Utils.scala:2019)
249        at org.apache.spark.util.SparkShutdownHookManager.$anonfun$runAll$1(ShutdownHookManager.scala:188)
250        at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.scala:18)
251        at scala.util.Try$.apply(Try.scala:210)
252        at org.apache.spark.util.SparkShutdownHookManager.runAll(ShutdownHookManager.scala:188)
253        at org.apache.spark.util.SparkShutdownHookManager$$anon$2.run(ShutdownHookManager.scala:178)
254        at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
255        at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
256        at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
257        at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
258        at java.base/java.lang.Thread.run(Thread.java:829)
259Caused by: java.lang.NullPointerException
260        at org.apache.spark.shuffle.ShuffleBlockPusher$.<clinit>(ShuffleBlockPusher.scala:465)
261        ... 16 more
26221/12/11 19:28:36 WARN ShutdownHookManager: ShutdownHook '' failed, java.util.concurrent.ExecutionException: java.lang.ExceptionInInitializerError
263java.util.concurrent.ExecutionException: java.lang.ExceptionInInitializerError
264        at java.base/java.util.concurrent.FutureTask.report(FutureTask.java:122)
265        at java.base/java.util.concurrent.FutureTask.get(FutureTask.java:205)
266        at org.apache.hadoop.util.ShutdownHookManager.executeShutdown(ShutdownHookManager.java:124)
267        at org.apache.hadoop.util.ShutdownHookManager$1.run(ShutdownHookManager.java:95)
268Caused by: java.lang.ExceptionInInitializerError
269        at org.apache.spark.executor.Executor.stop(Executor.scala:333)
270        at org.apache.spark.executor.Executor.$anonfun$stopHookReference$1(Executor.scala:76)
271        at org.apache.spark.util.SparkShutdownHook.run(ShutdownHookManager.scala:214)
272        at org.apache.spark.util.SparkShutdownHookManager.$anonfun$runAll$2(ShutdownHookManager.scala:188)
273        at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.scala:18)
274        at org.apache.spark.util.Utils$.logUncaughtExceptions(Utils.scala:2019)
275        at org.apache.spark.util.SparkShutdownHookManager.$anonfun$runAll$1(ShutdownHookManager.scala:188)
276        at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.scala:18)
277        at scala.util.Try$.apply(Try.scala:210)
278        at org.apache.spark.util.SparkShutdownHookManager.runAll(ShutdownHookManager.scala:188)
279        at org.apache.spark.util.SparkShutdownHookManager$$anon$2.run(ShutdownHookManager.scala:178)
280        at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
281        at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
282        at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
283        at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
284        at java.base/java.lang.Thread.run(Thread.java:829)
285Caused by: java.lang.NullPointerException
286        at org.apache.spark.shuffle.ShuffleBlockPusher$.<clinit>(ShuffleBlockPusher.scala:465)
287        ... 16 more
288

As I can see, it is caused by Illegal character in path at index 42: spark://DESKTOP-JO73CF4.mshome.net:2103/C:\classes, but I don't understand what it means exactly or how to deal with it.

How can I solve this problem?

I use Spark 3.2.0 Pre-built for Apache Hadoop 3.3 and later (Scala 2.13)

JAVA_HOME, HADOOP_HOME, SPARK_HOME path variables are set.

ANSWER

Answered 2022-Jan-07 at 15:11

I faced the same problem; I think Spark 3.2 is the problem itself.

After switching to Spark 3.1.2, it works fine.

Source https://stackoverflow.com/questions/70317481

QUESTION

Exception in thread "main" joptsimple.UnrecognizedOptionException: zookeeper is not a recognized option

Asked 2022-Mar-24 at 12:28

I am new to Kafka and ZooKeeper, and I am trying to create a topic, but I am getting this error:

1Exception in thread "main" joptsimple.UnrecognizedOptionException: zookeeper is not a recognized option
2        at joptsimple.OptionException.unrecognizedOption(OptionException.java:108)
3        at joptsimple.OptionParser.handleLongOptionToken(OptionParser.java:510)
4        at joptsimple.OptionParserState$2.handleArgument(OptionParserState.java:56)
5        at joptsimple.OptionParser.parse(OptionParser.java:396)
6        at kafka.admin.TopicCommand$TopicCommandOptions.<init>(TopicCommand.scala:517)
7        at kafka.admin.TopicCommand$.main(TopicCommand.scala:47)
8        at kafka.admin.TopicCommand.main(TopicCommand.scala)
9

I am using this command to create the topic -

.\bin\windows\kafka-topics.bat --create --zookeeper localhost:2181 --replication-factor 1 --partions 1 --topic TestTopic

ANSWER

Answered 2021-Sep-30 at 14:52

Read the official Kafka documentation for the version you downloaded, and not some other blog/article that you might have copied the command from

zookeeper is almost never used for CLI commands in current versions

If you run bin\kafka-topics on its own with --help or no options, then it'll print the help messaging that shows all available arguments.
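For reference, a command along these lines is what newer Kafka releases expect (the --zookeeper flag was replaced by --bootstrap-server, and note the corrected --partitions spelling); confirm the exact flags with --help on your installation:

.\bin\windows\kafka-topics.bat --create --bootstrap-server localhost:9092 --replication-factor 1 --partitions 1 --topic TestTopic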

Source https://stackoverflow.com/questions/69297020

QUESTION

How can I use :~: to determine type equality in Haskell?

Asked 2022-Mar-16 at 09:24

I'm trying to use :~: from Data.Type.Equality to determine type equality at compile time. My expectation is that it behaves along the lines of Scala's standard way of determining type equality:

case class Equals[A >: B <: B, B]()

Equals[Int, Int]     // compiles fine
Equals[String, Int]  // doesn't compile

So I tried

foo :: Int :~: Bool
foo = undefined

which I would expect to fail since Int :~: Bool is uninhabited. But it compiles just fine.

  • How is :~: supposed to work? The documentation of Data.Type.Equality is pretty impenetrable to me and I was not able to find a succinct example anywhere.

  • Maybe I'm completely off the track. In that case, how could I achieve the semantics of the Equals example in Haskell?

ANSWER

Answered 2022-Mar-16 at 09:24

Much as we Haskellers often pretend otherwise, every normal[1] type in Haskell is inhabited. That includes data Void, and it includes a :~: b for all a and b. Besides the polite values we usually acknowledge, there is also the bottom value.

undefined :: a is one way of producing the bottom value in any type a. So in particular undefined :: Int :~: Bool, and thus your code is perfectly type correct.

If you want a type equality that simply fails to compile if the equality can't be proved at compile time, then you want a type equality constraint (which is the ~ operator), not the :~: type. You use that like this:

foo :: Int ~ Int => ()   -- compiles fine
foo = ()

bar :: Int ~ Bool => ()  -- does technically compile
bar = ()

bar compiles only because constraints are assumed in the body of a function that has the constraints. But any attempt to call bar requires the compiler is able to prove the constraints in order to compile the call. So this fails:

baz :: ()
baz = bar

:~:, however, is not a constraint (it doesn't go left of the => arrow in types), but an ordinary type. a :~: b is the type of values that serve as a runtime proof that the type a is equal to the type b. If in fact they are not equal, your program won't fail to compile just before you expressed the type a :~: b; rather you just won't be able to actually come up with a value of that type (other than bottom).

In particular, a :~: b is a type that has a data constructor: Refl. To usefully use it, you usually require a a :~: b as an argument. and then pattern match on it. Within the scope of the pattern match (i.e. the body of a case statement), the compiler will use the assumption that the two types are equal. Since pattern matching on the bottom value will never succeed (it might throw an exception, or it might compute forever), the fact that you can always provide bottom as a "proof" that a :~: b doesn't actually cause huge problems; you can lie to the compiler, but it will never execute code that depended on your lie.

Examples corresponding to those in the OP would be:

foo :: Int :~: Int -> ()
foo proof
  = case proof of
      Refl  -> ()

bar :: Int :~: Bool -> ()
bar proof
  = case proof of
      Refl  -> ()

bar can exist even though it needs an impossible proof. We can even call bar with something like bar undefined, making use of the bottom value in the type Int :~: Bool. This won't be detected as an error at compile time, but it will throw a runtime exception (if it's actually evaluated; lazy evaluation might avoid the error). Whereas foo can simply be called with foo Refl.

:~: (and ~) is of course much more usefully used when the two types are (or contain) variables, rather than simple concrete types like Int and Bool. It's also frequently combined with something like Maybe so you have a way of expressing when the types are not proven to be equal. A slightly less trivial example would be:

strange :: Maybe (a :~: b) -> a -> b -> [a]
strange Nothing x _ = [x]
strange (Just Refl) x y
  = [x, y]

strange takes a maybe-proof that the types a and b are equal, and a value of each. If the maybe is Nothing, then the types might not be equal, so we can only put x in the list of a. If we get Just Refl, however, then a and b are actually the same type (inside the pattern match only!), so it's valid to put x and y in the same list.

But this does show a feature of :~: that cannot be achieved with ~. We can still call strange even when we want to pass it two values of different types; we just in that case are forced to pass Nothing as the first value (or Just undefined, but that won't get us anything useful). It allows us to write code that contemplates that a and b could be equal, without forcing a compilation failure if they actually aren't. Whereas a ~ b (in the constraints) would only allow us to require that they definitely are equal, and provably so at compile time.


[1] Where "normal type" means a member of the kind Type AKA *.
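For the Scala side of the comparison in the question: the compile-time-only behaviour corresponds to the standard library's =:= evidence rather than to Haskell's :~:. A minimal sketch (assuming Scala 2.13 or Scala 3):

object TypeEqualityDemo {
  val ok = implicitly[Int =:= Int]        // compiles: evidence of type equality is found
  // val bad = implicitly[Int =:= String] // would not compile: no =:= instance exists
}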

Source https://stackoverflow.com/questions/71493869

QUESTION

NoSuchMethodError on com.fasterxml.jackson.dataformat.xml.XmlMapper.coercionConfigDefaults()

Asked 2022-Feb-09 at 12:31

I'm parsing a XML string to convert it to a JsonNode in Scala using a XmlMapper from the Jackson library. I code on a Databricks notebook, so compilation is done on a cloud cluster. When compiling my code I got this error java.lang.NoSuchMethodError: com.fasterxml.jackson.dataformat.xml.XmlMapper.coercionConfigDefaults()Lcom/fasterxml/jackson/databind/cfg/MutableCoercionConfig; with a hundred lines of "at com.databricks. ..."

Maybe I forgot to import something, but to me this looks OK (tell me if I'm wrong):

1import ch.qos.logback.classic._
2import com.typesafe.scalalogging._
3import com.fasterxml.jackson._
4import com.fasterxml.jackson.core._
5import com.fasterxml.jackson.databind.{ObjectMapper, JsonNode}
6import com.fasterxml.jackson.dataformat.xml._
7import com.fasterxml.jackson.module.scala._
8import com.fasterxml.jackson.module.scala.experimental.ScalaObjectMapper
9import java.io._
10import java.time.Instant
11import java.util.concurrent.TimeUnit
12import javax.xml.parsers._
13import okhttp3.{Headers, OkHttpClient, Request, Response, RequestBody, FormBody}
14import okhttp3.OkHttpClient.Builder._
15import org.apache.spark._
16import org.xml.sax._
17

As I'm using Databricks, there's no SBT file for dependencies. Instead I installed the libs I need directly on the cluster. Here are the ones I'm using :

com.squareup.okhttp:okhttp:2.7.5
com.squareup.okhttp3:okhttp:4.9.0
com.squareup.okhttp3:okhttp:3.14.9
org.scala-lang.modules:scala-swing_3:3.0.0
ch.qos.logback:logback-classic:1.2.6
com.typesafe:scalalogging-slf4j_2.10:1.1.0
cc.spray.json:spray-json_2.9.1:1.0.1
com.fasterxml.jackson.module:jackson-module-scala_3:2.13.0
javax.xml.parsers:jaxp-api:1.4.5
org.xml.sax:2.0.1

The code causing the error is simply (coming from here : https://www.baeldung.com/jackson-convert-xml-json Chapter 5):

val xmlMapper: XmlMapper = new XmlMapper()
val jsonNode: JsonNode = xmlMapper.readTree(responseBody.getBytes())

with responseBody being a String containing an XML document (I previously checked the integrity of the XML). When I remove those two lines, the code works fine.

I've read tons of articles and forums, but I can't figure out what's causing my issue. Can someone please help me? Thanks a lot! :)

ANSWER

Answered 2021-Oct-07 at 12:08

Welcome to dependency hell and breaking changes in libraries.

This usually happens when various libraries bring in different versions of the same library; in this case it is Jackson. java.lang.NoSuchMethodError: com.fasterxml.jackson.dataformat.xml.XmlMapper.coercionConfigDefaults()Lcom/fasterxml/jackson/databind/cfg/MutableCoercionConfig; means: one library probably requires a Jackson version which has this method, but the version on the classpath does not yet have this function, or the method was removed because it was deprecated or renamed.

In a case like this it is good to print the dependency tree and check which Jackson version each library requires, and if possible use newer versions of the required libraries.

Solution: use libraries that depend on compatible versions of the Jackson library. There is no other shortcut.
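If the project were built with sbt rather than assembled from cluster-installed libraries, one hypothetical way to pin a single Jackson version across transitive dependencies is dependencyOverrides (the version numbers below are illustrative only):

// build.sbt sketch: force every Jackson artifact to one consistent version
ThisBuild / dependencyOverrides ++= Seq(
  "com.fasterxml.jackson.core" % "jackson-core" % "2.13.0",
  "com.fasterxml.jackson.core" % "jackson-databind" % "2.13.0",
  "com.fasterxml.jackson.dataformat" % "jackson-dataformat-xml" % "2.13.0",
  "com.fasterxml.jackson.module" %% "jackson-module-scala" % "2.13.0"
)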

Source https://stackoverflow.com/questions/69480470

QUESTION

Exhaustive pattern matching when deprecated sealed trait instance present

Asked 2022-Jan-04 at 12:06

Suppose the following scenario

sealed trait Status

case object Active extends Status
case object Inactive extends Status
@scala.deprecated("deprecated because reasons")
case object Disabled extends Status

Considering that the Disabled object cannot be removed, and given val status: Status = getStatus, we run into one of these problems:

  1. Fails with match not exhaustive:
status match {
  case Active => ???
  case Inactive => ???
}
  2. Fails with deprecated value being used
status match {
  case Active => ???
  case Inactive => ???
  case Disabled => ???
}
  3. Losing compile time safety
status match {
  case Active => ???
  case Inactive => ???
  case _ => ???
}

Can type-safe exhaustive matching be achieved in this scenario?

ANSWER

Answered 2022-Jan-03 at 14:16

I think option 2 is the way to go. But to make it work you have to disable the warning selectively. This is supported starting with Scala 2.13.2 and 2.12.13

@annotation.nowarn("cat=deprecation")
def someMethod(s: Status) = s match {
  case Active   => "Active"
  case Inactive => "Inactive"
  case Disabled => "Disabled"
}

Source https://stackoverflow.com/questions/70565944

QUESTION

How to check Databricks cluster for Log4J vulnerability?

Asked 2021-Dec-14 at 20:03

I'm using a Databricks cluster version 7.3 LTS with Scala 2.12. This version does use Log4J.

The official docs say that it uses Log4J version 1.2.17. Does this mean I do not have this vulnerability? And if I do, can I manually patch it on the cluster or do I need to upgrade the cluster to the next LTS version?

ANSWER

Answered 2021-Dec-13 at 17:00

As you wrote, most Databricks clusters use 1.2.17, so it is a different version; the version affected by the vulnerability is not used by Databricks.

The only problem is if you install a different version yourself on the cluster. Even if you have installed an affected version, you can mitigate the problem by setting the Spark config in the cluster's advanced config as below:

1spark.driver.extraJavaOptions "-Dlog4j2.formatMsgNoLookups=true"
2spark.executor.extraJavaOptions "-Dlog4j2.formatMsgNoLookups=true"
3

Source https://stackoverflow.com/questions/70332805

QUESTION

How to create a general method for Scala 3 enums

Asked 2021-Nov-28 at 15:43

I want to have a simple enumDescr function for any Scala 3 enum.

Example:

1  @description(enumDescr(InvoiceCategory))
2  enum InvoiceCategory:
3    case `Travel Expenses`
4    case Misc
5    case `Software License Costs`
6

In Scala 2 this is simple (Enumeration):

def enumDescr(enum: Enumeration): String =
  s"$enum: ${enum.values.mkString(", ")}"

But how is it done in Scala 3:

def enumDescr(enumeration: ??) = ...

ANSWER

Answered 2021-Oct-27 at 23:45

I don't see any common trait shared by all enum companion objects.

You still can invoke the values reflectively, though:

import reflect.Selectable.reflectiveSelectable

def descrEnum(e: { def values: Array[?] }) = e.values.mkString(",")

enum Foo:
  case Bar
  case Baz

descrEnum(Foo) // "Bar,Baz"
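A related sketch (not from the original answer): if you are willing to pass the values array explicitly instead of the companion object, no structural reflection is needed, since Scala 3 enum cases extend scala.reflect.Enum:

def enumDescrOf[E <: scala.reflect.Enum](values: Array[E]): String =
  values.mkString(", ")

enumDescrOf(Foo.values) // "Bar, Baz"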

Source https://stackoverflow.com/questions/69743692

QUESTION

Sbt-native-packager cannot connect to Docker daemon

Asked 2021-Nov-01 at 22:24

Here is my configuration which worked for more than one year but suddenly stopped working.

1variables:
2  DOCKER_DRIVER: overlay2
3  DOCKER_TLS_CERTDIR: ""
4
5
6  stage: deploy
7  image: "hseeberger/scala-sbt:11.0.9.1_1.4.4_2.13.4"
8  before_script:
9    - apt-get update
10    - apt-get install sudo
11    - apt-get install apt-transport-https ca-certificates curl software-properties-common -y
12    - curl -fsSL https://download.docker.com/linux/debian/gpg | sudo apt-key add -
13    - sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/debian $(lsb_release -cs) stable"
14    - apt-get update
15    - apt-get install docker-ce -y
16    - sudo service docker start
17    - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
18
19  script:
20    - sbt docker:publishLocal
21

The error in GitlabCI is the following:

[warn] [1] sbt-native-packager wasn't able to identify the docker version. Some features may not be enabled
[warn] sbt-native packager tries to parse the `docker version` output. This can fail if
[warn] 
[warn]   - the output has changed:
[warn]     $ docker version --format '{{.Server.APIVersion}}'
[warn] 
[warn]   - no `docker` executable is available
[warn]     $ which docker
[warn] 
[warn]   - you have not the required privileges to run `docker`
[warn] 
[warn] You can display the parsed docker version in the sbt console with:
[warn] 
[warn]   sbt:your-project> show dockerApiVersion
[warn] 
[warn] As a last resort you could hard code the docker version, but it's not recommended!!
[warn] 
[warn]   import com.typesafe.sbt.packager.docker.DockerApiVersion
[warn]   dockerApiVersion := Some(DockerApiVersion(1, 40))
[warn] 
[success] All package validations passed
[error] Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
[info] Removing intermediate image(s) (labeled "snp-multi-stage-id=9da90b0c-75e0-4f46-98eb-a17a1998a3b8") 
[error] Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
[error] Something went wrong while removing multi-stage intermediate image(s)
[error] java.lang.RuntimeException: Nonzero exit value: 1
[error]     at com.typesafe.sbt.packager.docker.DockerPlugin$.publishLocalDocker(DockerPlugin.scala:687)
[error]     at com.typesafe.sbt.packager.docker.DockerPlugin$.$anonfun$projectSettings$41(DockerPlugin.scala:266)
[error]     at com.typesafe.sbt.packager.docker.DockerPlugin$.$anonfun$projectSettings$41$adapted(DockerPlugin.scala:258)
[error]     at scala.Function1.$anonfun$compose$1(Function1.scala:49)
[error]     at sbt.internal.util.$tilde$greater.$anonfun$$u2219$1(TypeFunctions.scala:62)
[error]     at sbt.std.Transform$$anon$4.work(Transform.scala:68)
[error]     at sbt.Execute.$anonfun$submit$2(Execute.scala:282)
[error]     at sbt.internal.util.ErrorHandling$.wideConvert(ErrorHandling.scala:23)
[error]     at sbt.Execute.work(Execute.scala:291)
[error]     at sbt.Execute.$anonfun$submit$1(Execute.scala:282)
[error]     at sbt.ConcurrentRestrictions$$anon$4.$anonfun$submitValid$1(ConcurrentRestrictions.scala:265)
[error]     at sbt.CompletionService$$anon$2.call(CompletionService.scala:64)
[error]     at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
[error]     at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
[error]     at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
[error]     at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
[error]     at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
[error]     at java.base/java.lang.Thread.run(Thread.java:834)
[error] (Docker / publishLocal) Nonzero exit value: 1

ANSWER

Answered 2021-Aug-16 at 16:16

It looks like you are trying to run the Docker daemon inside your build image.

For Linux, you need to make sure that the current user (the one running sbt) has the proper permissions to run Docker commands; this usually requires some post-install steps.

Maybe you could fix your script by running sudo sbt docker:publishLocal instead?

It is more common now to use a service to have a docker daemon already set up for your builds:

services:
  - docker:dind

See this example on GitLab. There is also a section in the (EE) docs.
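
If the daemon is reachable but the plugin still cannot parse the docker version output, the warning in the log above also mentions a last-resort override in the sbt build. A minimal sketch, assuming sbt-native-packager's DockerPlugin is already enabled on the project and using an illustrative API version:

// build.sbt -- last-resort override quoted in the warning above; prefer fixing
// Docker connectivity so the plugin can detect the version on its own.
import com.typesafe.sbt.packager.docker.DockerApiVersion

dockerApiVersion := Some(DockerApiVersion(1, 40))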

Source https://stackoverflow.com/questions/68683399

QUESTION

Spark: unable to load native-hadoop library for platform

Asked 2021-Oct-25 at 15:57

I am trying to start with Spark. I have Hadoop (3.3.1) and Spark (3.2.2) in my library. I have set the SPARK_HOME, PATH, HADOOP_HOME and LD_LIBRARY_PATH to their respective paths. I am also running JDK 17 (echo and -version work fine in the terminal).

Yet, I still get the following error:

Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
Setting default log level to "WARN".
To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
21/10/25 17:17:07 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
java.lang.IllegalAccessError: class org.apache.spark.storage.StorageUtils$ (in unnamed module @0x1f508f09) cannot access class sun.nio.ch.DirectBuffer (in module java.base) because module java.base does not export sun.nio.ch to unnamed module @0x1f508f09
  at org.apache.spark.storage.StorageUtils$.<init>(StorageUtils.scala:213)
  at org.apache.spark.storage.StorageUtils$.<clinit>(StorageUtils.scala)
  at org.apache.spark.storage.BlockManagerMasterEndpoint.<init>(BlockManagerMasterEndpoint.scala:110)
  at org.apache.spark.SparkEnv$.$anonfun$create$9(SparkEnv.scala:348)
  at org.apache.spark.SparkEnv$.registerOrLookupEndpoint$1(SparkEnv.scala:287)
  at org.apache.spark.SparkEnv$.create(SparkEnv.scala:336)
  at org.apache.spark.SparkEnv$.createDriverEnv(SparkEnv.scala:191)
  at org.apache.spark.SparkContext.createSparkEnv(SparkContext.scala:277)
  at org.apache.spark.SparkContext.<init>(SparkContext.scala:460)
  at org.apache.spark.SparkContext$.getOrCreate(SparkContext.scala:2690)
  at org.apache.spark.sql.SparkSession$Builder.$anonfun$getOrCreate$2(SparkSession.scala:949)
  at scala.Option.getOrElse(Option.scala:189)
  at org.apache.spark.sql.SparkSession$Builder.getOrCreate(SparkSession.scala:943)
  at org.apache.spark.repl.Main$.createSparkSession(Main.scala:106)
  ... 55 elided
<console>:14: error: not found: value spark
       import spark.implicits._
              ^
<console>:14: error: not found: value spark
       import spark.sql
              ^
Welcome to
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/  '_/
   /___/ .__/\_,_/_/ /_/\_\   version 3.2.0
      /_/
         
Using Scala version 2.12.15 (OpenJDK 64-Bit Server VM, Java 17.0.1)
Type in expressions to have them evaluated.
Type :help for more information.

Any ideas how to fix this?

ANSWER

Answered 2021-Oct-25 at 15:41

Open your terminal and type this command: gedit .bashrc

Ensure that you have added native after lib, as shown below:

export HADOOP_OPTS="-Djava.library.path=$HADOOP_HOME/lib/native"

Then save the file and run this command in the terminal: source ~/.bashrc

Try this; it may help you.

Source https://stackoverflow.com/questions/69710694

QUESTION

Use Scala with Azure Functions

Asked 2021-Sep-06 at 18:09

Azure Functions currently supports the following languages: C#, JavaScript, F#, Java, PowerShell, Python, TypeScript. Scala is not on the list.

How can one use scala to write azure functions?

ANSWER

Answered 2021-Sep-06 at 13:21

Azure Functions supports Java, and it is pretty straightforward to make it work for Scala:

import com.microsoft.azure.functions.ExecutionContext
import com.microsoft.azure.functions.HttpMethod
import com.microsoft.azure.functions.HttpRequestMessage
import com.microsoft.azure.functions.HttpResponseMessage
import com.microsoft.azure.functions.HttpStatus
import com.microsoft.azure.functions.annotation.AuthorizationLevel
import com.microsoft.azure.functions.annotation.FunctionName
import com.microsoft.azure.functions.annotation.HttpTrigger

import java.util.Optional

class ScalaFunction {
  /**
   * This function listens at endpoint "/api/ScalaFunction"
   */
  @FunctionName("ScalaFunction")
  def run(
           @HttpTrigger(
             name = "req",
             methods = Array(HttpMethod.GET, HttpMethod.POST),
             authLevel = AuthorizationLevel.ANONYMOUS) request: HttpRequestMessage[Optional[String]],
           context: ExecutionContext): HttpResponseMessage = {
    context.getLogger.info("Scala HTTP trigger processed a request.")
    request.createResponseBuilder(HttpStatus.OK).body("This is written in Scala. Hello, ").build
  }
}

And this is how the pom.xml looks; there is an added dependency for scala-library and the maven-scala-plugin.

<?xml version="1.0" encoding="UTF-8" ?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>

    <groupId>com.jp</groupId>
    <artifactId>AzurePrac</artifactId>
    <version>1.0-SNAPSHOT</version>
    <packaging>jar</packaging>

    <name>Azure Java Functions</name>

    <properties>
        <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
        <java.version>1.8</java.version>
        <azure.functions.maven.plugin.version>1.13.0</azure.functions.maven.plugin.version>
        <azure.functions.java.library.version>1.4.2</azure.functions.java.library.version>
        <functionAppName>azureprac-20210906165023338</functionAppName>
    </properties>

    <dependencies>
        <dependency>
            <groupId>com.microsoft.azure.functions</groupId>
            <artifactId>azure-functions-java-library</artifactId>
            <version>${azure.functions.java.library.version}</version>
        </dependency>

        <!-- Test -->
        <dependency>
            <groupId>org.junit.jupiter</groupId>
            <artifactId>junit-jupiter</artifactId>
            <version>5.4.2</version>
            <scope>test</scope>
        </dependency>

        <dependency>
            <groupId>org.mockito</groupId>
            <artifactId>mockito-core</artifactId>
            <version>2.23.4</version>
            <scope>test</scope>
        </dependency>

        <dependency>
            <groupId>org.scala-lang</groupId>
            <artifactId>scala-library</artifactId>
            <version>2.12.12</version>
        </dependency>
    </dependencies>

    <build>
        <plugins>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-compiler-plugin</artifactId>
                <version>3.8.1</version>
                <configuration>
                    <source>${java.version}</source>
                    <target>${java.version}</target>
                    <encoding>${project.build.sourceEncoding}</encoding>
                </configuration>
            </plugin>
            <plugin>
                <groupId>com.microsoft.azure</groupId>
                <artifactId>azure-functions-maven-plugin</artifactId>
                <version>${azure.functions.maven.plugin.version}</version>
                <configuration>
                    <!-- function app name -->
                    <appName>${functionAppName}</appName>
                    <!-- function app resource group -->
                    <resourceGroup>java-functions-group</resourceGroup>
                    <!-- function app service plan name -->
                    <appServicePlanName>java-functions-app-service-plan</appServicePlanName>
                    <!-- function app region-->
                    <!-- refers https://github.com/microsoft/azure-maven-plugins/wiki/Azure-Functions:-Configuration-Details#supported-regions for all valid values -->
                    <region>westus</region>
                    <!-- function pricingTier, default to be consumption if not specified -->
                    <!-- refers https://github.com/microsoft/azure-maven-plugins/wiki/Azure-Functions:-Configuration-Details#supported-pricing-tiers for all valid values -->
                    <!-- <pricingTier></pricingTier> -->
                    <!-- Whether to disable application insights, default is false -->
                    <!-- refers https://github.com/microsoft/azure-maven-plugins/wiki/Azure-Functions:-Configuration-Details for all valid configurations for application insights-->
                    <!-- <disableAppInsights></disableAppInsights> -->
                    <runtime>
                        <!-- runtime os, could be windows, linux or docker-->
                        <os>windows</os>
                        <javaVersion>8</javaVersion>
                    </runtime>
                    <appSettings>
                        <property>
                            <name>FUNCTIONS_EXTENSION_VERSION</name>
                            <value>~3</value>
                        </property>
                    </appSettings>
                </configuration>
                <executions>
                    <execution>
                        <id>package-functions</id>
                        <goals>
                            <goal>package</goal>
                        </goals>
                    </execution>
                </executions>
            </plugin>
            <!--Remove obj folder generated by .NET SDK in maven clean-->
            <plugin>
                <artifactId>maven-clean-plugin</artifactId>
                <version>3.1.0</version>
                <configuration>
                    <filesets>
                        <fileset>
                            <directory>obj</directory>
                        </fileset>
                    </filesets>
                </configuration>
            </plugin>

            <plugin>
                <groupId>org.scala-tools</groupId>
                <artifactId>maven-scala-plugin</artifactId>
                <version>2.15.2</version>
                <executions>
                    <execution>
                        <goals>
                            <goal>compile</goal>
                            <goal>testCompile</goal>
                        </goals>
                    </execution>
                </executions>
            </plugin>
        </plugins>
    </build>
</project>

And finally, to publish: mvn clean package; mvn azure-functions:deploy
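
The same approach carries over to other trigger types. As a further illustration (not part of the original answer), here is a hedged sketch of a timer-triggered function in Scala; the function name and CRON schedule are assumptions chosen for the example:

import com.microsoft.azure.functions.ExecutionContext
import com.microsoft.azure.functions.annotation.{FunctionName, TimerTrigger}

class ScalaTimerFunction {
  // Fires on the given CRON schedule -- every five minutes here, purely as an example.
  @FunctionName("ScalaTimerFunction")
  def run(
      @TimerTrigger(name = "timerInfo", schedule = "0 */5 * * * *") timerInfo: String,
      context: ExecutionContext): Unit =
    context.getLogger.info(s"Scala timer trigger fired: $timerInfo")
}

It builds with the same pom.xml, since maven-scala-plugin compiles it alongside ScalaFunction.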

Source https://stackoverflow.com/questions/69075094

Community Discussions contain sources that include Stack Exchange Network

Tutorials and Learning Resources in Scala

Tutorials and Learning Resources are not available at this moment for Scala
