varnish-cache — Varnish Cache source code repository
Trending Discussions on varnish-cache
QUESTION
After installing Varnish and Hitch on an Ubuntu 20.04 server, I get the following error:
curl: (52) Empty reply from server
Tutorial I am following:
ANSWER
Answered 2022-Jan-12 at 14:07

Uncomment the following line in your hitch.conf:
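The exact line from the answer is not preserved above. For context, a minimal hitch.conf in the style of the Varnish/Hitch tutorial typically looks like the following (the frontend port, backend address, and certificate path are assumptions, not taken from the original answer):

```
# /etc/hitch/hitch.conf -- terminate TLS and hand off to Varnish
frontend = "[*]:443"
# Varnish's PROXY-protocol listener; adjust to your setup
backend  = "[127.0.0.1]:8443"
# send PROXY-v2 headers so Varnish sees the real client IP
write-proxy-v2 = on
pem-file = "/etc/hitch/testcert.pem"
```

If Hitch hands connections to a Varnish listener that does not expect the PROXY protocol (or vice versa), curl will typically report an empty reply, so the backend address and write-proxy-v2 setting are the first things to check.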
QUESTION
I am trying to set up a Varnish cache where the Varnish instance is hosted on one server and the backend is on a different server. Both are AWS Lightsail instances. The issue I am having is that when I go to the site, I get the Error 503 Backend fetch failed error. Here is the Varnish default.vcl:
ANSWER
Answered 2021-May-07 at 08:19

I discovered the key information in the logs:
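The log output itself is omitted above. As a sketch, a varnishlog invocation like the following (standard VSL query syntax, though this exact filter is my assumption) surfaces the reason behind a 503 Backend fetch failed:

```shell
# group by request and show only transactions with a failed
# or 5xx backend fetch
sudo varnishlog -g request -q "BerespStatus >= 500 or FetchError"
```

Typical culprits in a two-server setup are a wrong host/port in the VCL backend definition or a firewall blocking the backend port between the instances.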
QUESTION
This question is regarding getting Xdebug to work with a CLI PHP script hosted inside a web-server Docker instance.
I have Docker containers: web-server, varnish-cache, nginx-proxy.
I am able to successfully debug a Magento 2 web-page via browser with this VS Code Launch config:
This is with the new Xdebug v3, which removed a lot of v2 configuration settings.
Client (Windows 10, my laptop) IP: 192.168.1.150; host (Ubuntu 20.04) IP: 192.168.1.105, running the Docker containers on IPs 172.100.0.2-5.
VS Code launch:
ANSWER
Answered 2021-Mar-26 at 12:49

You need to set Xdebug's xdebug.client_host to the IP address of your IDE, which you indicated is 192.168.1.150. You also need to turn off xdebug.discover_client_host, as that would try to use the internal Docker network IP (172.100.0.2), which is not where your IDE is listening.
Remember: Xdebug makes a connection to the IDE, not the other way around.
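Putting the answer together, the PHP configuration inside the container would look roughly like this (Xdebug 3 setting names; the port and the choice of a conf.d drop-in are assumptions based on the IPs quoted in the question):

```ini
; e.g. /usr/local/etc/php/conf.d/xdebug.ini inside web-server
xdebug.mode = debug
; the IDE's address on the LAN, not the Docker-internal IP
xdebug.client_host = 192.168.1.150
xdebug.client_port = 9003
; must stay off, or Xdebug would connect to 172.100.0.2 instead
xdebug.discover_client_host = 0
```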
QUESTION
Why is ExecStart= defined twice, and why is the first one empty? Is it because Varnish will start two processes, one parent and one child? If so, where can I read about it?
I can't seem to find any information about this.
In Varnish's own documentation, again and again across versions, this is the instruction:
Source: https://varnish-cache.org/docs/trunk/tutorial/putting_varnish_on_port_80.html
ANSWER
Answered 2021-Feb-10 at 13:04

According to https://www.freedesktop.org/software/systemd/man/systemd.service.html#ExecStart= :
Unless Type= is oneshot, exactly one command must be given. When Type=oneshot is used, zero or more commands may be specified. Commands may be specified by providing multiple command lines in the same directive, or alternatively, this directive may be specified more than once with the same effect. If the empty string is assigned to this option, the list of commands to start is reset, prior assignments of this option will have no effect. If no ExecStart= is specified, then the service must have RemainAfterExit=yes and at least one ExecStop= line set. (Services lacking both ExecStart= and ExecStop= are not valid.)
Long story short:
- All occurrences of ExecStart are executed, unless Type=oneshot.
- By setting ExecStart= (an empty value), we're making sure that previous values are removed.
- By setting ExecStart again with an actual value, only this command will be executed.
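The reset-then-set pattern described in the quoted manual page is what a systemd drop-in for Varnish typically looks like; a sketch (the varnishd flags are illustrative, not copied from the original unit file):

```ini
# /etc/systemd/system/varnish.service.d/customexec.conf
[Service]
# the empty assignment clears the ExecStart inherited from
# the packaged varnish.service
ExecStart=
# this is the one command that will actually run
ExecStart=/usr/sbin/varnishd -a :80 -f /etc/varnish/default.vcl -s malloc,256m
```

Without the empty first line, systemd would reject the unit, since non-oneshot services allow exactly one ExecStart command.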
QUESTION
I'm using Varnish + nginx on a web server and I'm trying to get the real IP of users visiting my site into the access.log. I was able to get it to work, but for some reason my local IP (the one Varnish is running from) gets appended to the log entry as well. Here's what it looks like:
ANSWER
Answered 2020-Oct-08 at 11:58

Everything is working as expected according to your log format:
log_format main '$http_x_forwarded_for - $remote_user [$time_local] ' '"$request" $status $body_bytes_sent "$http_referer" ' '"$http_user_agent"';
Note how you're logging the value of the X-Forwarded-For header, as opposed to the IP. If you want the real IP, use $remote_addr together with the directives you already tried:
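A minimal sketch of that combination, assuming Varnish connects to nginx from 127.0.0.1 (the trusted-proxy address is an assumption):

```nginx
# trust X-Forwarded-For only when the request comes from Varnish
set_real_ip_from 127.0.0.1;
real_ip_header   X-Forwarded-For;

# $remote_addr now holds the real client IP
log_format main '$remote_addr - $remote_user [$time_local] '
                '"$request" $status $body_bytes_sent "$http_referer" '
                '"$http_user_agent"';
```

With the realip module rewriting $remote_addr, the log no longer needs $http_x_forwarded_for, which is what was dragging the proxy's own IP into each entry.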
QUESTION
I have the following Varnish configuration:
ANSWER
Answered 2020-Sep-23 at 10:58

It looks like you're doing all the right things, but I would advise you to do some debugging.
If you run the following command, the Hash tag will appear in varnishlog:
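The command itself is omitted above. Hash records are masked by default in Varnish, so unmasking them is the usual way to make them show up; a sketch:

```shell
# unmask Hash records at runtime, then watch only those records
sudo varnishadm param.set vsl_mask +Hash
sudo varnishlog -i Hash
```

Each Hash record shows a string that was fed into the cache key, which makes it easy to verify that a custom vcl_hash is hashing what you think it is.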
QUESTION
I have a large Docker project with Dockerfiles for nginx, apache2, varnish, and redis, configured and working well after weeks of changes and testing.
I am now at a point where I have set up the project to use docker-compose and override .yml files for easy setup. I am trying to use the same docker-compose setup for multiple projects (websites).
Normal startup (using docker-compose.yml and an optional docker-compose.override.yml)
docker-compose up -d
Custom startup (using specific docker-compose files)
docker-compose -f docker-compose.yml -f custom/docker-compose.website1.yml up -d
Both these methods start up fine:
docker-compose ps
Ignore the fact that they are Exit 0 - I stopped them using docker-compose stop; the containers work fine:
nginx-proxy /usr/bin/supervisord Exit 0
redis-cache /usr/bin/supervisord Exit 0
varnish-cache /usr/bin/supervisord Exit 0
web-server-apache2 /usr/bin/supervisord Exit 0
Now I want a second project (website) to use the same docker/docker-compose configuration setup:
docker-compose -f docker-compose.yml -f anothercustomfolder/docker-compose.website2.yml up -d
To my surprise, docker-compose recreated the existing containers and did not create a new set of containers:
See 'current setup' section for how I setup things.
ANSWER
Answered 2020-Aug-18 at 12:44

Thanks to advice from David Maze, I worked further on configuring the docker-compose setup to work with multiple projects.
Information based on docker-compose v1.25.0 (July 2020)
This discussion is especially important when you want to re-use (persist) your containers (start/stop instead of just up/down, which deletes them).
As initially pointed out in my question, if you try to create containers using docker-compose up -d, there are some pitfalls which the tool simply does not handle at the moment.
PITFALLS OF THE CURRENT DOCKER-COMPOSE IMPLEMENTATION:
- If you just use overridden docker-compose*.yml files with different container_names (per 'project') in the same folder, docker-compose up will simply replace the existing containers, as explained in my question.
- You can do the following: docker-compose -p CUSTOM_PROJECT_NAME -f file1.yml -f file2.yml up -d, but this on its own is useless: these containers will only work until you stop them. As soon as you want to do docker-compose start (to restart the existing container set), it will simply fail with Error: No containers to start.
- If you use two different folders with the same docker-compose project (i.e. a cloned project), for instance ./dc-project1 and ./dc-project2, but use the container_name field inside the docker-compose.*.yml files: when you try to run docker-compose -f f1.yml -f f2.yml up -d inside ./dc-project1 and the same inside the ./dc-project2 folder, you will get the following error: You have to remove (or rename) that container to be able to reuse that name.
- Similar issues with your Docker network will occur with docker-compose when you use overridden files (I removed most of the custom settings to make the network setting clearer): the network will be attached correctly from your overridden file on docker-compose up, but as soon as you want to docker-compose start, it looks for your default network name in the default docker-compose.yml, or even the docker-compose.override.yml file if it exists. In other words, it ignores your custom docker-compose override files (see example below):

docker-compose.yml:
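The workaround that follows from these pitfalls is to pass the same project name and file list to every docker-compose command, not only to up; a sketch (the project name and file paths reuse the ones from the question):

```shell
# create the stack under an explicit project name
docker-compose -p website1 \
  -f docker-compose.yml -f custom/docker-compose.website1.yml up -d

# later commands must repeat the same -p and -f flags,
# otherwise compose falls back to the default project and network
docker-compose -p website1 \
  -f docker-compose.yml -f custom/docker-compose.website1.yml stop
docker-compose -p website1 \
  -f docker-compose.yml -f custom/docker-compose.website1.yml start
```

Persisting COMPOSE_PROJECT_NAME and COMPOSE_FILE in a per-project .env file achieves the same effect without retyping the flags.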
QUESTION
I have set up a docker-compose project which creates multiple images:
ANSWER
Answered 2020-Aug-14 at 13:47

In core Docker, there are two separate concepts. An image is a built version of some piece of software packaged together with its dependencies; a container is a running instance of an image. There are separate docker build and docker run commands to build images and launch containers, and you can launch multiple containers from a single image.
Docker Compose wraps these concepts. In particular, the build: block corresponds to the image-build step, and that is what invokes the Dockerfile. None of the other Compose options are available or visible inside the Dockerfile. You cannot access the container_name: or environment: variables or volumes: because those don't exist at this point in the build lifecycle; you also cannot contact other Compose services from inside the Dockerfile.
It's pretty common to have multiple containers run off the same image if they have largely the same code base but need a different top-level command. One example is a Python Django application that needs Celery background workers; you'd have the same project structure but a different command for the Celery worker.
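A minimal Compose sketch of that pattern (service names and commands are illustrative, loosely following the Django/Celery example):

```yaml
services:
  web:
    build: .             # the single image-build step...
    image: myapp:latest
    command: gunicorn myproject.wsgi
  worker:
    image: myapp:latest  # ...reused by a second container
    command: celery -A myproject worker
```

Only the web service builds the image; the worker runs another container from the same image with a different top-level command.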
QUESTION
I am setting up a Magento server with Nginx SSL termination and Varnish. Nginx and Varnish 5.1 are installed on a dedicated host, 192.168.1.251 (Ubuntu), and Magento on 192.168.1.250 (Ubuntu):
nginx 1.6 (192.168.1.251:443 or https://mywebsite.com/) + varnish (127.0.0.1:6081) -> magento 2.3 (192.168.1.250:8080)
The problem is that content like .jpg and .svg files is served directly from 192.168.1.250, i.e. my backend server, and scripts are blocked due to CORS (see the Chrome DevTools screenshot referenced in the question). If I access 192.168.1.251:6081, i.e. the Varnish host and port, all the content comes from the backend server.
nginx ssl termination config:
ANSWER
Answered 2020-Jul-30 at 07:15

The issue you're experiencing is probably related to the fact that your Magento base URL is set to 192.168.1.250:8080. Magento will enforce that value if it notices the Host header (or the protocol scheme) doesn't match its own. So in your case, you're sending the following host header to Magento through Varnish:
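Concretely, the usual fix on the nginx side is to forward the public host and scheme instead of the backend's; a sketch (hostnames and ports are taken from the question, but the directives are my assumption, not the original config):

```nginx
location / {
    proxy_pass http://127.0.0.1:6081;
    # present the public site name and scheme to Magento,
    # so it does not rewrite asset URLs to 192.168.1.250:8080
    proxy_set_header Host mywebsite.com;
    proxy_set_header X-Forwarded-Proto https;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
```

The Magento base URL should also be changed to https://mywebsite.com/ so that generated links match what the browser requested.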
QUESTION
Please help: we are trying to add users and roles to our legacy application by mapping users in an Apache AuthGroupFile, with varnish-cache as a reverse proxy. Any user authenticated through Apache Basic Auth should be able to go through; the user is mapped to a role in the AuthGroupFile, and in the back end we check for the group name and assign the role in the application.
Can we read the AuthGroupFile into a variable in varnish-cache and check for the REMOTE_USER header?
#AuthgroupFile admin: foo boo roo readonly: goo too zoo
#varnish-cache rule
ANSWER
Answered 2020-Jul-27 at 13:41

If you want to check for authenticated users, I'd advise you to have a look at vmod_basicauth. It's a Varnish module that reads an .htpasswd file and gives you a VCL API to interact with these logins.
Here's how to use this module in VCL:
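The VCL itself is not preserved above; with vmod_basicauth the check looks roughly like this (the .htpasswd path is an assumption):

```vcl
vcl 4.0;
import basicauth;

sub vcl_recv {
    # validate the Authorization header against the htpasswd file
    if (!basicauth.match("/var/www/.htpasswd", req.http.Authorization)) {
        return (synth(401, "Authentication required"));
    }
}
```

In a complete setup you would also add a WWW-Authenticate response header in vcl_synth so that browsers prompt for credentials on the 401.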
Community Discussions and Code Snippets contain sources that include the Stack Exchange Network.