[lug] A few docker questions...

Bear Giles bgiles at coyotesong.com
Sun Jan 26 10:28:57 MST 2020


I think you're referring to a user-defined bridge network. Those are
restricted to the host - I want to have Docker containers (with independent
IP addresses) connected across multiple hosts.

I found a partial solution. I knew that I could specify

    ports:
        - "10.10.10.10:53:53"
        - "10.10.10.10:443:443"
        - "10.10.10.10:80:80"
but it was flaky. I finally had a d'oh moment and realized that the problem
was that many services (and Docker) bind to 0.0.0.0 by default, so I'd get
a port conflict. Hence the flakiness. Once I was careful to always specify
the desired IP address, everything worked reliably.
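
The conflict is easy to reproduce with plain sockets, nothing
Docker-specific. A minimal sketch (assumes Linux, where all of 127/8 is on
the loopback interface, so 127.0.0.2 is bindable):

```python
import socket

# Many services (and Docker itself) bind the wildcard address by default,
# which claims the port on *every* interface.
wild = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
wild.bind(("0.0.0.0", 0))            # port 0: let the kernel pick a free port
port = wild.getsockname()[1]

# A later bind to the same port on a specific address now fails with
# EADDRINUSE, even though the addresses differ - the source of the flakiness.
specific = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
try:
    specific.bind(("127.0.0.1", port))
    conflict = False
except OSError:
    conflict = True

# Two binds to *different specific* addresses on the same port coexist,
# which is why pinning every service to its own IP made things reliable.
a = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
a.bind(("127.0.0.1", 0))
p = a.getsockname()[1]
b = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
b.bind(("127.0.0.2", p))             # same port, different address: fine

print("wildcard vs specific conflict:", conflict)   # True
```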

This requires setting up extra IP addresses on the host. For netplan that's

network:
   version: 2
   ethernets:
       enp0s31f6:
           dhcp4: true
           addresses: [ 10.10.10.10/24, 10.10.20.10/24, 10.10.30.10/24 ]


and for systemd-networkd (a .network file) it's

[Match]
Name=enp3s0

[Network]
DHCP=no
Gateway=192.168.1.1
DNS=10.0.10.11
Domains=lan
Address=192.168.1.11/24
Address=10.10.10.11/24
Address=10.10.20.11/24
Address=10.10.30.11/24
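
Either way the new addresses have to be applied, and they're worth
verifying before blaming Docker. A sketch of the usual commands (interface
names taken from the examples above; run as root):

```shell
# netplan host: test the config with automatic rollback, then apply it
sudo netplan try
sudo netplan apply

# systemd-networkd host: reload the .network files
sudo networkctl reload

# confirm the extra addresses actually landed on the interface
ip -4 addr show dev enp3s0
```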

(where the first uses DHCP and the second doesn't). (Oddly, netplan doesn't
always seem to be respected, and I've seen some unexpected results with
'resolvectl'. I'm still missing something.)

This is good enough that I can run the same service in multiple containers
but I can't play with advanced networking.

Hmm... actually I just realized that I can still create the bridges and
virtual networks via netplan/systemd-networkd and then assign the Docker
containers to those addresses. It's not as convenient as using Docker
networking - the onus is on me to set up the bridges/virtual networks,
assign IP addresses, etc. - but I can still play with advanced networking.
The host will still be in a privileged position and see all containers, but
other systems will only see the containers via the bridge etc.
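
A sketch of that approach, with made-up names (br-lab, labnet, the
10.10.40.0/24 subnet): the bridge is created on the host first, then
Docker's bridge driver is pointed at it via its
com.docker.network.bridge.name option. Needs root and a running Docker
daemon.

```shell
# host side: create and bring up the bridge ourselves
sudo ip link add br-lab type bridge
sudo ip addr add 10.10.40.1/24 dev br-lab
sudo ip link set br-lab up

# docker side: a user-defined network that reuses the existing bridge
docker network create --driver bridge \
    --subnet 10.10.40.0/24 --gateway 10.10.40.1 \
    -o com.docker.network.bridge.name=br-lab \
    labnet

# containers now get predictable addresses on our bridge
docker run -d --name web --network labnet --ip 10.10.40.10 nginx
```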

For my purposes that may actually be preferable, since one of the goals is
to learn mid-level Linux networking and then the AWS VPC equivalents. It
won't translate directly to work, but for the most part we won't need it in
the dev environment since we don't require the same level of security as
production. E.g., we might need a 6-node Hadoop cluster to test HDFS HA,
but we don't have to put everything on a private network with external
access limited to an edge node. Doing so would be nice since it could catch
corner cases where we assumed we'd always have access, but we don't need it
by default. However, it would be nice to be able to create this
configuration if the need arose.

Bear


On Sat, Jan 25, 2020 at 4:28 PM Matt James <matuse at gmail.com> wrote:

> I'm still learning this stuff so my answer may not be 100% accurate but
> from what I can tell, you might be after something like this:
>
> version: "3.5"
> services:
>   sh:
>     image: ubuntu
>     container_name: sh
>     ports:
>         - "53:53"
>         - "8443:443"
>         - "8080:80"
>     networks:
>         - somenetname
> networks:
>   somenetname:
>
> then just 'docker-compose up -d sh' and you should be set.
>
> We use a variation of that in our test environment and I run ~6 docker
> containers from one ubuntu host this way.
>
> HTH
>
> Matt
>
>
> On Sat, Jan 25, 2020 at 2:54 PM Bear Giles <bgiles at coyotesong.com> wrote:
>
>> No, knowing that smart people find it difficult to use and unreliable is
>> helpful. It bumps the "what do I really need?" factor up since this is for
>> a development environment and we have more flexibility than we would in a
>> production environment. I still need to bang on it enough to defend taking
>> a step back but it's a lot easier to cut it without an exhaustive effort.
>>
>> For instance with DNS it would be nice to have the docker image visible
>> to the local network but it's not unreasonable to use the host's DNS server
>> (dnsmasq) for that - the host can see the docker container and use it as an
>> upstream source. As you pointed out I can use nginx to forward the
>> http/https traffic.
>>
>> On Sat, Jan 25, 2020 at 10:30 AM Rob Nagler <nagler at bivio.biz> wrote:
>>
>>> Sounds complicated.
>>>
>>> We use --network=host almost exclusively. A lot of things don't work
>>> with overlay networks. In our case, MPI. Every time I talk to people about
>>> how they manage MPI (or other parallelization tools) they always say "we
>>> use host networking". This may be an aside for you, but we've found that by
>>> using host networking, things just work, and the Docker daemon is
>>> restartable.
>>>
>>> I have never gotten "classic" or "modern" Docker swarm to work reliably.
>>> It sets up fine, but there are bugs, especially with overlay networks. Those
>>> bugs get fixed slowly afaict, and with k8s taking over orchestration, I
>>> wouldn't bet on swarm having a long life. I don't like k8s, just saying
>>> that the vast majority of people who do orchestration, do.
>>>
>>> We orchestrate ourselves. I find Docker compose and k8s to be difficult
>>> to understand. Rather than fighting the tool, we use systemd to start most
>>> containers with docker run, which is easily testable from a shell. I also
>>> find tools like docker.py to be a disaster, because they are almost always
>>> behind the option curve of the Go client. Also when you use these wrappers,
>>> they are harder to test. The Go client has a clear and well-documented
>>> interface that is easy to prototype with.
>>>
>>> As far as 0.0.0.0 goes, that seems irrelevant to host networking. We use
>>> nginx (native, not dockerized) to proxy 443 and 80 to containers. Nginx is
>>> reliable, easy to configure, and handles TLS termination.
>>>
>>> Perhaps not the answer you were looking for, sorry.
>>>
>>> Rob
>>>
>>> _______________________________________________
>>> Web Page:  http://lug.boulder.co.us
>>> Mailing List: http://lists.lug.boulder.co.us/mailman/listinfo/lug
>>> Join us on IRC: irc.hackingsociety.org port=6667 channel=#hackingsociety
>>
>