docker-spark | Apache Spark docker container image | Continuous Deployment library
kandi X-RAY | docker-spark Summary
Apache Spark docker container image (Standalone mode)
Community Discussions
Trending Discussions on docker-spark
QUESTION
I am following this link to create a Spark cluster. I am able to run the Spark cluster. However, I have to give an absolute path to start spark-shell. I am trying to set environment variables, i.e. PATH and a few others, in start-shell.sh. However, they are not being set inside the container. I tried printing them using printenv inside the container, but these variables are never reflected.
Am I trying to set the environment variables incorrectly? The Spark cluster is running successfully, though.
I am using docker-compose.yml to build and recreate the image and container:
Dockerfile ...
docker-compose up --build
ANSWER
Answered 2021-Aug-16 at 14:09
There are a couple of different ways to set environment variables in Docker, and a couple of different ways to run processes. A container normally runs one process, which is controlled by the image's ENTRYPOINT and CMD settings. If you docker exec a second process in the container, it does not run as a child process of the main process, and will not see environment variables that are set by that main process.
In the setup you show here, the start-spark.sh script is the main container process (it is the image's CMD). If you run docker exec your-container printenv, it will see things set in the Dockerfile but not things set in this script.
Things like filesystem paths will generally be fixed every time you run the container, no matter what command you're running there, so you can specify these in the Dockerfile.
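A minimal Dockerfile sketch of the approach the answer describes, i.e. setting fixed values such as PATH with ENV so that both the main process and any docker exec'd process can see them. The base image, paths, and script name are illustrative assumptions, not taken from the question:

```dockerfile
# Hypothetical Dockerfile fragment. Values set with ENV are baked into the image
# and are visible to docker exec'd processes, unlike exports made inside
# start-spark.sh at runtime. The /opt/spark location is an assumption.
FROM openjdk:11-jre-slim
ENV SPARK_HOME=/opt/spark
ENV PATH="${SPARK_HOME}/bin:${SPARK_HOME}/sbin:${PATH}"
COPY start-spark.sh /start-spark.sh
CMD ["/bin/bash", "/start-spark.sh"]
```

With this in place, docker exec your-container printenv shows the Spark entries in PATH even though the startup script never exports them.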
QUESTION
I just downloaded this Docker image to set up a Spark cluster with two worker nodes. The cluster is up and running; however, I want to submit my Scala file to this cluster, and I am not able to start spark-shell in it.
When I was using another Docker image, I was able to start it using spark-shell.
Can someone please explain whether I need to install Scala separately in the image, or whether there is a different way to start it?
UPDATE
Here is the error: bash: spark-shell: command not found
ANSWER
Answered 2021-Aug-16 at 08:53
You're getting command not found because PATH isn't correctly established.
Use the absolute path /opt/spark/bin/spark-shell.
Also, I'd suggest packaging your Scala project as an uber jar to submit, unless you have no external dependencies or like to add --packages/--jars manually.
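A short sketch of both suggestions; /opt/spark/bin comes from the answer, while the container name, master URL, class name, and jar path are placeholders:

```bash
# Start spark-shell via its absolute path inside the running container
# ("spark-master" is a placeholder container name).
docker exec -it spark-master /opt/spark/bin/spark-shell --master spark://spark-master:7077

# Alternatively, package the Scala project as an uber jar (e.g. with sbt-assembly)
# and submit it; the class name and jar path below are placeholders.
docker exec -it spark-master /opt/spark/bin/spark-submit \
  --master spark://spark-master:7077 \
  --class com.example.Main \
  /opt/spark-apps/my-app-assembly.jar
```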
QUESTION
So I have a Spark cluster running in Docker using Docker Compose, built from the docker-spark images. Then I add two more containers: one behaves as a server (plain Python) and one as a client (a Spark Streaming app). They both run on the same network.
For the server (plain Python) I have something like
ANSWER
Answered 2020-Dec-20 at 16:17
Okay, so I found that I can use the IP of the container, as long as all my containers are on the same network. So I check the IP by running
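The exact command is elided above; one commonly used way to look up a container's IP on a shared Compose network looks roughly like this (the container name is a placeholder):

```bash
# Print the IP address a container has on its attached network(s);
# "server" is a placeholder container name.
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' server
```

On a user-defined Compose network, containers can usually also reach each other by service name, which avoids hard-coding an IP.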
QUESTION
I am looking at this image, and it seems the layers are redundant, and these redundant layers ended up in the image? If they did, how did they end up in the image, leading to such a large amount of space? How could I strip these layers?
https://microbadger.com/images/openkbs/docker-spark-bde2020-zeppelin:latest
ANSWER
Answered 2020-Sep-03 at 23:39
What you are seeing are not layers, but images that were pushed to the same registry. Basically, those are the different versions of one image.
In a repository, each image is accessible through a unique ID, its SHA value. Furthermore, one can tag images with convenient names, e.g. V1.0 or latest. These tags are not fixed, however. When an image is pushed with a tag that is already assigned to another image, the old image loses the tag and the new image gains it. Thus, a tag can move from one image to another. The tag latest is no exception. It has, however, one special property: it is always assigned to the most recently pushed version of an image.
The owner of the registry has pushed new versions of the image and not tagged the old versions. Thus, all the old versions show up as "untagged".
If we pull a specific image, we will receive this image and this image only, not the complete registry.
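For illustration only (these commands are not part of the original answer): pulling or inspecting a specific tag touches just that one version of the image mentioned in the question.

```bash
# Pull only the image that the "latest" tag currently points at,
# not every version ever pushed to the repository.
docker pull openkbs/docker-spark-bde2020-zeppelin:latest

# Show which digest (SHA) the local tag currently resolves to.
docker images --digests openkbs/docker-spark-bde2020-zeppelin
```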
QUESTION
I downloaded two images and the sizes are as follows:
ANSWER
Answered 2020-Aug-30 at 13:04
Yes, identical layers are "shared". Docker uses hashes (covering both the filesystem contents and the commands) to identify these layers.
So Docker shows you the size of the images (including the base images), but that doesn't mean they need that much disk space.
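One way to see the sharing in practice; the image names below are placeholders:

```bash
# List each image's layer digests; identical digests mean the layer is shared
# and stored on disk only once.
docker inspect --format '{{json .RootFS.Layers}}' image-one
docker inspect --format '{{json .RootFS.Layers}}' image-two

# Break image disk usage into SHARED SIZE and UNIQUE SIZE, which reflects the
# real space used rather than the sum of the reported image sizes.
docker system df -v
```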
QUESTION
I am using this setup (https://github.com/mvillarrealb/docker-spark-cluster.git) to establish a Spark cluster, but none of the IPs mentioned there, such as 10.5.0.2, are accessible via the browser; they just time out. I am unable to figure out what I am doing wrong.
I am using Docker 2.3 on macOS Catalina.
In the spark-base Dockerfile I am using the following settings instead of the ones given there:
ANSWER
Answered 2020-Jul-22 at 16:45
The Dockerfile tells the container which port to expose.
The Compose file tells the host which ports to publish and to which ports inside the container the traffic should be forwarded.
If the host (source) port is not specified, a random port is generated. This helps in this scenario because you have multiple workers and you cannot specify a unique host port for all of them; that would result in a conflict.
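A minimal sketch of the two port styles the answer contrasts; the service names and port numbers are assumptions, not taken from the repository in question:

```yaml
# Hypothetical docker-compose.yml fragment.
services:
  spark-master:
    ports:
      - "8080:8080"   # fixed mapping: host port 8080 forwards to container port 8080
  spark-worker:
    ports:
      - "8081"        # container port only: Docker picks a free host port,
                      # so several workers can run without a port conflict
```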
QUESTION
I'm learning Spark and I'd like to use an Avro data file, since Avro support is external to Spark. I've downloaded the jar, but my problem is how to copy it into that specific place, the 'jars' dir, inside my container. I've read a related post here but I do not understand it.
I've also seen the command below from the main Spark website, but I think I need the jar file copied into the container before running it.
ANSWER
Answered 2020-Apr-17 at 23:28
Quoting the docker cp documentation:
docker cp SRC_PATH CONTAINER:DEST_PATH
If SRC_PATH specifies a file and DEST_PATH does not exist, then the file is saved to a file created at DEST_PATH.
From the command you tried, the destination path /jars does not exist in the container, since the actual destination should have been /usr/spark-2.4.1/jars/. Thus the jar was copied into the container as a file named jars under the root (/) directory.
Try this command instead to add the jar to Spark's jars directory:
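The corrected command itself is elided above; assuming the placeholder names below (the container name and jar filename are hypothetical, while the destination path is the one given in the answer), it would take roughly this shape:

```bash
# Copy the Avro jar into the Spark jars directory that actually exists inside
# the container; "my-spark-container" and the jar filename are placeholders.
docker cp spark-avro_2.11-2.4.1.jar my-spark-container:/usr/spark-2.4.1/jars/
```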
Community Discussions and Code Snippets contain sources that include the Stack Exchange Network.