docker-mail | Docker container running dovecot and postfix | Continuous Deployment library

by invokr | Shell | Version: Current | License: MIT

kandi X-RAY | docker-mail Summary

docker-mail is a Shell library typically used in DevOps, Continuous Deployment, and Docker applications. docker-mail has no bugs and no vulnerabilities, it has a Permissive License, and it has low support. You can download it from GitHub.

This container aims to provide a secure and portable mail environment based on Postfix and Dovecot. SSL is enabled by default and new TLS keys are generated when the container starts; these should be replaced with your own keys if possible. Dovecot listens only via SSL on port 993. Postfix is configured to use opportunistic encryption so that mail from non-TLS clients is not bounced. In addition to common spam lists, OpenDMARC is used to authenticate messages when available. Mozilla's public suffix list is updated once per week via cron. This is not a prime example of how you should build a Docker container, but I'm too lazy to pull all the configurations apart so that each service runs in its own container. CentOS is used as the base image instead of Alpine so I can be sure Postfix and Dovecot stay on their respective versions.
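As a minimal illustration (not taken from the repository's documentation), starting the container might look roughly like this; the published ports follow the description above, while the image name and the certificate mount point are assumptions:

  # Run the container, publishing SMTP (Postfix, opportunistic TLS) and IMAPS (Dovecot, SSL only).
  # "docker-mail" as the image name and /etc/ssl/mail as the in-container certificate
  # path are assumptions; adjust them to whatever the image actually uses.
  docker run -d --name mail \
    -p 25:25 \
    -p 993:993 \
    -v /path/to/your/certs:/etc/ssl/mail:ro \
    docker-mail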

Support

docker-mail has a low-activity ecosystem.
It has 8 stars with 2 forks. There are 2 watchers for this library.
It had no major release in the last 6 months.
There are 0 open issues and 1 has been closed. On average, issues are closed in 18 days. There are no open pull requests.
It has a neutral sentiment in the developer community.
The latest version of docker-mail is current.

Quality

              docker-mail has no bugs reported.

Security

              docker-mail has no vulnerabilities reported, and its dependent libraries have no vulnerabilities reported.

License

              docker-mail is licensed under the MIT License. This license is Permissive.
              Permissive licenses have the least restrictions, and you can use them in most projects.

Reuse

              docker-mail releases are not available. You will need to build from source code and install.
              Installation instructions are not available. Examples and code snippets are available.


            docker-mail Key Features

            No Key Features are available at this moment for docker-mail.

            docker-mail Examples and Code Snippets

            No Code Snippets are available at this moment for docker-mail.

            Community Discussions

            QUESTION

How does Docker map volume names in a docker-compose file to volumes on the system?
            Asked 2020-Jul-26 at 08:40

            I am migrating a tvial docker mail server from one system to another. I set this up some time ago, and vaguely remember the steps, but not every detail. I copied my mail data and mail state volumes to the new system, but when I went to run docker on the new system I was confused. The old system shows this in docker compose:

            ...

            ANSWER

            Answered 2020-Jul-26 at 08:40

The resources are prefixed with the project (deployment) name.
This can be specified using the -p flag when you run the docker-compose up command. If not specified, the project name defaults to the name of the directory that contains your docker-compose.yaml file.

Official documentation: https://docs.docker.com/compose/reference/envvars/#compose_project_name. Relevant extract for the question:

            COMPOSE_PROJECT_NAME:
            Sets the project name. This value is prepended along with the service name to the container on start up. For example, if your project name is myapp and it includes two services db and web, then Compose starts containers named myapp_db_1 and myapp_web_1 respectively.
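A short shell sketch of how the prefix shows up for named volumes (the directory and volume names here are only illustrative):

  # In a directory named "mailserver" whose docker-compose.yml declares a volume "maildata":
  docker-compose up -d
  docker volume ls          # lists "mailserver_maildata"

  # Override the prefix explicitly with -p (or the COMPOSE_PROJECT_NAME variable):
  docker-compose -p mail up -d
  docker volume ls          # lists "mail_maildata"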

            Source https://stackoverflow.com/questions/63093055

            QUESTION

            Which ports are used for which purpose specifically in email communication?
            Asked 2020-May-03 at 19:51

            I understand that there are several options to choose from when using an e-mail server. E.g. 25 and 587 for opt-in encryption and 465 for enforced encryption.

docker-mailserver, a popular Docker mail server container, describes ports 587, 465, 143 and 993 specifically as submission and retrieval ports. The actual server-to-server communication is apparently established using port 25 on both sides. Is this a common implementation?

My understanding until now was that the actual communication (for outgoing emails) is done over port 587 or 465.

Encouraged by the exposed-port explanation of the above-mentioned container, I now figured that the whole retrieval, submission, and transfer process works (extremely simplified) like this:

Use port 25, 465, or 587 to send email from the client to the transmitting mail server.

The transmitting mail server sends the email over port 25 to the recipient's mail server.

The recipient then receives the email on port 143 or 993 from his/her mail server (assuming IMAP/IMAPS is used), and it is shown accordingly in his/her mail client.

Is this correct? If so, is it even possible to send emails from a mail server whose ISP has blocked port 25 such that users of common mail services like Gmail, Yahoo, etc. can receive them?

            ...

            ANSWER

            Answered 2020-May-03 at 19:51

This is more of a network administration question than a programming question, so it may be considered off topic. That being said:

The SMTP protocol is used for two different but similar purposes: Message Submission and Message Transmission.

Message Submission is done by an MSA, Message Submission Agent, generally on behalf of an end user, but perhaps on behalf of a script or process. Traditionally, these are clients like Thunderbird, Apple Mail, or the email client on your phone. In modern practice, this is generally done authenticated (with user credentials) and encrypted on ports 465 or 587.

• Port 465, SMTPS (smtp-secure, by analogy with https), is technically deprecated but widely used. It is used for SMTP over TLS, where the connection is encrypted from establishment until termination.
• Port 587, submission, is generally used with STARTTLS, where the connection is first made unencrypted but upgraded shortly thereafter using a special command.

            Both these ports generally accept mail from a user with credentials, for any destination, and will hold and relay these for the user. For example, if you connect to smtp.gmail.com on port 465 or 587, and authenticate as user@gmail.com, it will allow you to submit email for anyone, as long as it is from user@gmail.com.

            Message Transmission is done by an MTA, Message Transmission Agent, generally on behalf of all the users of a site or service. Relaying is done between sites on port 25, with opt-in STARTTLS encryption. Authentication is not generally done, but there is a complicated system of reputation tracking, firewalls, and blacklists generally used behind the scenes. Usually only mail for a specific site is accepted on this port. For example, if you connect to one of gmail.com's MX servers (for example, gmail-smtp-in.l.google.com as of this writing) on port 25, and it thinks you are a trustworthy IP, it will accept mail from anyone to any gmail address (subject to further scanning). It will refuse to relay to anyone offsite.

            Message Retrieval is generally done by IMAP on ports 143 (with STARTTLS) or 993 (with TLS from connection). This is a pull service used by an end-user (generally) to retrieve emails being held by an MTA on their behalf. POP3 is also used (on 110 and 995) by some sites, but it is a much less capable protocol.
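To make these encryption modes concrete, here is a rough probe with openssl s_client (hostnames are placeholders): implicit-TLS ports negotiate encryption immediately, while STARTTLS ports begin in plain text and upgrade.

  # Implicit TLS from the first byte: SMTPS (465) and IMAPS (993)
  openssl s_client -connect smtp.example.com:465 -quiet
  openssl s_client -connect imap.example.com:993 -quiet

  # STARTTLS: plain-text connection upgraded by a command: submission (587), IMAP (143), MTA-to-MTA (25)
  openssl s_client -connect smtp.example.com:587 -starttls smtp -quiet
  openssl s_client -connect imap.example.com:143 -starttls imap -quiet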

Traditionally, submission and transmission were both done on port 25 without authentication, but that's a no-go on the modern internet. It was split into transmission and submission so network resources could be better controlled. As you may have discovered, many ISPs and cloud services restrict port 25 so end users cannot act as transmitters without their consent, and so relaying happens either through their servers or some other service that will take responsibility.

Thus, through this model, Gmail users can generally only submit via Gmail's submission server, other users must submit through their service's server, and spammers can't just set up a server anywhere to transmit messages to Gmail. If they do and their ISP hasn't firewalled it, their reputation will shortly be trashed and they will be placed on many blacklists.

            Additionally, a lot of this doesn't even happen over the traditional protocols anymore. If you use Google services and clients, you will likely be using a custom protocol tunneled over HTTPS, or the public GMAIL REST protocol. If you're using Microsoft, they have no less than 3 email protocols: Exchange ActiveSync, Exchange Web Services, and Microsoft Graph/Outlook MAIL Rest API, all using HTTPS.

            Source https://stackoverflow.com/questions/61579686

            QUESTION

docker compose rails 6 example with mailcatcher doesn't work
            Asked 2020-Mar-21 at 18:41

            I have this file "docker-compose.yml"

            ...

            ANSWER

            Answered 2020-Mar-20 at 14:10

This is because you can't access the service through 127.0.0.1. If your services are on the same network, you will be able to access it via the links directive you passed; just change your address from 127.0.0.1 to mailcatcher. If your services are on different networks, you can open ports on the service that you need to call and then access it by the machine's local IP address (not 127.0.0.1, but your physical/virtual machine's IP address on the local network).
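A minimal sketch of verifying this from inside the compose network (the service names web and mailcatcher are assumptions based on the question's setup, and the lookup tool must exist in the image):

  # The compose service name resolves to the mailcatcher container's IP;
  # 127.0.0.1 only ever points at the container doing the lookup.
  docker-compose exec web getent hosts mailcatcher

  # So the app's SMTP settings should use host "mailcatcher" and port 1025
  # (mailcatcher's default SMTP port) instead of 127.0.0.1.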

            Source https://stackoverflow.com/questions/60775490

            QUESTION

            Nodemailer connect to local docker-mailserver
            Asked 2018-Nov-07 at 10:48

I need to set up a local mail server and send emails from it. I use docker-mailserver and try to connect to it from a simple NodeJS script that uses Nodemailer. For the docker-mailserver setup, I followed the guide from its docs and changed only the DOMAINNAME env var to the domain name of my server. The resulting Docker port mapping for the container is:

            ...

            ANSWER

            Answered 2018-Nov-07 at 10:48

By default, containers are isolated in Docker. You can allow connections between containers by adding the link argument when creating your container. Usage:
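The original usage snippet is not preserved above; below is a rough sketch of the legacy --link flag the answer refers to (container and image names are assumptions). Note that --link is considered legacy; a shared user-defined network, as in the next answer, is the modern alternative.

  # Link the Node app container to an already-running "mailserver" container,
  # so Nodemailer can reach it with host "mailserver" and the SMTP port (e.g. 587)
  docker run -d --name node-app --link mailserver:mailserver my-node-image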

            Source https://stackoverflow.com/questions/53181854

            QUESTION

            RainLoop+tomav/docker-mailserver: Cannot connect to server from RainLoop Webmail client
            Asked 2018-Aug-29 at 13:11

            To start off, I followed this guide to the letter: https://www.davd.eu/byecloud-building-a-mailserver-with-modern-webmail/

I am attempting to create a mail server for my server, but I thought I'd test the above implementation locally first, to make sure I can at least get everything up and running and see what I should be expecting before trying it on the server. Here's what I did:

            1. Added "127.0.0.1 mail.fancydomain.tld" to "/etc/hosts" (I wanted to start by using mail.fancydomain.tld rather than my actual domain that the mailserver will be on to minimize any changing while following the guide)
            2. I created this "docker-compose.yml":

              ...

            ANSWER

            Answered 2018-Jul-10 at 19:41

The solution for this one is to make sure both containers, RainLoop and Mail, share a bridged network. Then all the configurations can stay the same.
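A minimal sketch of putting two already-running containers on a shared bridge network (the container names rainloop and mail are assumptions):

  docker network create --driver bridge mail-bridge
  docker network connect mail-bridge rainloop
  docker network connect mail-bridge mail

  # RainLoop can then reach the mail container by name, e.g. IMAP on mail:143/993
  # and SMTP on mail:25/587, with no changes to the rest of the configuration.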

            Source https://stackoverflow.com/questions/51112710

            QUESTION

            Where should I put shared services for multiple kubernetes-clusters?
            Asked 2018-May-24 at 10:16

Our company is developing an application which runs in 3 separate Kubernetes clusters in different versions (production, staging, testing). We need to monitor our clusters and the applications over time (metrics and logs). We also need to run a mail server.

            So basically we have 3 different environments with different versions of our application. And we have some shared services that just need to run and we do not care much about them:

• Monitoring: We need to install InfluxDB and Grafana. In every cluster there's a pre-installed Heapster that needs to send data to our tools.
• Logging: We haven't decided yet.
• Mailserver (https://github.com/tomav/docker-mailserver)
• Independent services: Sentry, GitLab

            I am not sure where to run these external shared services. I found these options:

            1. Inside each cluster

            We need to install the tools 3 times for the 3 environments.

            Con:

            • We don't have one central point to analyze our systems.
            • If the whole cluster is down, we cannot look at anything.
            • Installing the same tools multiple times does not feel right.
            2. Create an additional cluster

            We install the shared tools in an additional kubernetes-cluster.

            Con:

            • Cost for an additional cluster
• It's probably harder to send ongoing data to an external cluster (networking, security, firewall, etc.).
3. Use an additional root-server

            We run docker-containers on an oldschool-root-server.

            Con:

            • Feels contradictory to use root-server instead of cutting-edge-k8s.
            • Single point of failure.
            • We need to control the docker-containers manually (or attach the machine to rancher).

I tried to google the problem, but I cannot find anything about the topic. Can anyone give me a hint or some links on this topic? Or is it just not a relevant problem that a cluster might go down?

To me, the second option sounds less evil, but I cannot yet estimate whether it's hard to transfer data from one cluster to another.

            The important questions are:

            • Is it a problem to have monitoring-data in a cluster because one cannot see the monitoring-data if the cluster is offline?
            • Is it common practice to have an additional cluster for shared services that should not have an impact on other parts of the application?
            • Is it (easily) possible to send metrics and logs from one kubernetes-cluster to another (we are running kubernetes in OpenTelekomCloud which is basically OpenStack)?

            Thanks for your hints,

            Marius

            ...

            ANSWER

            Answered 2018-May-24 at 10:16

            That is a very complex and philosophic topic, but I will give you my view on it and some facts to support it.

            I think the best way is the second one - Create an additional cluster, and that's why:

            1. You need a point which should be accessible from any of your environments. With a separate cluster, you can set the same firewall rules, routes, etc. in all your environments and it doesn't affect your current workload.

            2. Yes, you need to pay a bit more. However, you need resources to run your shared applications, and overhead for a Kubernetes infrastructure is not high in comparison with applications.

3. With a separate cluster, you can set up a real HA solution, which you might not need for staging and development clusters, so you will not pay for that multiple times.

            4. Technically, it is also OK. You can use Heapster to collect data from multiple clusters; almost any logging solution can also work with multiple clusters. All other applications can be just run on the separate cluster, and that's all you need to do with them.

            Now, about your questions:

            Is it a problem to have monitoring-data in a cluster because one cannot see the monitoring-data if the cluster is offline?

            No, it is not a problem with a separate cluster.

            Is it common practice to have an additional cluster for shared services that should not have an impact on other parts of the application?

            I think, yes. At least I did it several times, and I know some other projects with similar architecture.

            Is it (easily) possible to send metrics and logs from one kubernetes-cluster to another (we are running kubernetes in OpenTelekomCloud which is basically OpenStack)?

            Yes, nothing complex there. Usually, it does not depend on the platform.

            Source https://stackoverflow.com/questions/50488149

            QUESTION

Multiple Docker containers with the same ports
            Asked 2018-Mar-10 at 01:27

            I got the following containers:

            ...

            ANSWER

            Answered 2018-Mar-10 at 01:27

Try using networks:

(At the moment I have nowhere to try it, so I hope it works, or at least helps you decipher the dilemma.)
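A rough sketch of the idea, assuming two containers that both listen on port 25 internally (all names are illustrative): containers on a shared user-defined network keep their internal ports, and at most one of them publishes a given port on the host.

  docker network create backend
  docker run -d --name mail-a --network backend -p 25:25 image-a   # published on the host
  docker run -d --name mail-b --network backend image-b            # reachable inside the network as mail-b:25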

            Source https://stackoverflow.com/questions/49204049

            QUESTION

            Postfix attempting to bind to port 25, used for outgoing mail only (maybe not necessary?)
            Asked 2017-Aug-08 at 14:41

            Not sure if this belongs on Stack Overflow or somewhere else but I'll try here first.

I have multiple servers, each with the same setup, where nearly everything running on the server is in a Docker container. I have two goals I would like to achieve. First, the host machine is set up to send emails for users with uid < 1000 to my external email address. Second, on one server, I have a docker-mailserver container running to handle random, seldom-used emails (for log files, etc.).

It seems I can have either the host machine running Postfix OR the docker-mailserver running (and bound to port 25). Currently, I have the Docker container running the mail server fully operational, and everything can send and receive just fine.

However, now I am unable to start Postfix on the host machine so that I can receive emails sent to the root user (things like cron output), since port 25 is, rightfully, in use by the actual mail server receiving email.

Questions:

1. How can I tell Postfix on the host not to bind to port 25? If port 25 is only used for receiving mail, why would my outgoing-only Postfix config need to use port 25?

2. I am perfectly comfortable not receiving emails for the root user, if whatever would normally be sent to the root user is logged elsewhere (perhaps syslog?). Are the emails to root only maintained as emails, or are they somewhere else, negating the need for Postfix on the host for forwarding to a real account?

            Thanks in advance.

            ...

            ANSWER

            Answered 2017-Aug-08 at 14:41

            Specifically answering your questions first:

1. You should be able to have Postfix listen on any port you specify by editing the master.cf configuration file and changing the smtp listener to a numbered port of your choice. Of course, if it isn't a "known" port, I'm not sure what/who will ever connect to it, but maybe you don't care in this situation, as you are only using Postfix as a relay?

2. It may depend some on the Linux distribution or setup of your host, but most systems will leave email in the local delivery "mail spool" if there is no system/daemon set up to move it anywhere else. Back when that was the normal way to handle multi-user mail on UNIX systems, a logged-in user used a mail reader client to read through email in their local "spool", and of course if you don't have that, you can simply vi your mail file and read the raw contents if necessary. These mail files are normally located in /var/spool/mail on most systems.

            Stepping away from your questions, I would guess you don't necessarily need postfix running on your host, especially as your containerized mailserver is handling the port 25 SMTP traffic for the host. Local email will stay local, I assume, without postfix, and be available through local means; and you might even find a simpler solution to external forwarding (e.g. a script that can parse mail spools and just connect to an SMTP relay and send it to an external address) if you want that.
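As a sketch of the two points above: on a stock Postfix install the smtp listener line lives in /etc/postfix/master.cf, and root's undelivered mail sits in the local spool (the alternative port 12525 here is an arbitrary choice):

  # /etc/postfix/master.cf on the host: change the first field of the
  #   "smtp      inet  n  -  ...  smtpd"
  # line from "smtp" (i.e. port 25) to an arbitrary free port such as "12525", then reload:
  sudo postfix reload

  # Root's undelivered local mail (cron output etc.) can be read straight from the spool:
  sudo less /var/spool/mail/root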

            Source https://stackoverflow.com/questions/45557575

Community Discussions and Code Snippets contain sources that include the Stack Exchange Network.

            Vulnerabilities

            No vulnerabilities reported

            Install docker-mail

            You can download it from GitHub.
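Since there are no published releases, installation means cloning the repository and building the image yourself; a minimal sketch, assuming the Dockerfile sits at the repository root and using an arbitrary image tag:

  git clone https://github.com/invokr/docker-mail.git
  cd docker-mail
  docker build -t docker-mail .
  # then run it as sketched in the overview above, replacing the auto-generated TLS keys with your own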

            Support

For any new features, suggestions, and bugs, create an issue on GitHub. If you have any questions, check and ask them on the Stack Overflow community page.
            CLONE
          • HTTPS

            https://github.com/invokr/docker-mail.git

          • CLI

            gh repo clone invokr/docker-mail

• SSH

            git@github.com:invokr/docker-mail.git
