balena-io-experimental/balena-wifi-connect-example

Do not include python packages from the distro

Opened this issue · 2 comments

The Dockerfile no longer builds when the Python packages are installed from the distro. Install them with pip instead.
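A sketch of the suggested change, assuming the Display-O-Tron dependencies can come from PyPI instead of apt. The package names here are assumptions: psutil replaces python-psutil, smbus2 stands in for python-smbus, and dbus-python replaces python-dbus (dbus-python compiles against the libdbus/glib headers, so those stay as apt packages):

```Dockerfile
# Keep only the toolchain and headers from apt; Python libraries come from pip
RUN apt-get update && apt-get install -yq --no-install-recommends \
    build-essential \
    python \
    python-pip \
    python-dev \
    python-setuptools \
    libdbus-1-dev \
    libglib2.0-dev \
    wireless-tools && \
    apt-get clean && rm -rf /var/lib/apt/lists/*

# Install the Python dependencies from PyPI instead of the distro
RUN pip install -U pip && \
    pip install psutil smbus2 dbus-python dot3k
```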

Hello, thanks for your help over on the resin forums with this!

I got this Dockerfile to build, but I still cannot get the project working:


# Base-image for python on any machine using a template variable
FROM resin/%%RESIN_MACHINE_NAME%%-debian

# Set the maintainer
LABEL maintainer="Joe Roberts <joe@resin.io>, Zahari Petkov <zahari@resin.io>"

# Enable systemd init system
ENV INITSYSTEM on

# Set the working directory
WORKDIR /usr/src/app

# We have split up the resin-wifi-connect and Display-O-Tron HAT configuration to make clear
# the different parts needed. In your dockerfile you should combine these steps to reduce
# the number of layers.

# -- Start of resin-wifi-connect section -- #

# Set the device type environment variable using Dockerfile templates
ENV DEVICE_TYPE=%%RESIN_MACHINE_NAME%%

# Use apt-get to install dependencies
RUN apt-get update && apt-get install -yq --no-install-recommends \
    dnsmasq && \
    apt-get clean && rm -rf /var/lib/apt/lists/*

# Install resin-wifi-connect
RUN curl https://api.github.com/repos/resin-io/resin-wifi-connect/releases/latest -s \
    | grep -oP 'browser_download_url": "\K.*%%RESIN_ARCH%%\.tar\.gz' \
    | xargs -n1 curl -Ls \
    | tar -xvz -C /usr/src/app/
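For reference, the extraction step of that pipeline can be exercised locally against a canned API response. The JSON fragment and URL below are made up for illustration, with armv6hf standing in for %%RESIN_ARCH%%:

```shell
# Hypothetical fragment of the GitHub releases API response
json='"browser_download_url": "https://example.com/wifi-connect-linux-armv6hf.tar.gz"'

# \K discards the matched prefix, so only the bare URL is printed
url=$(echo "$json" | grep -oP 'browser_download_url": "\K.*armv6hf\.tar\.gz')
echo "$url"
```

The extracted URL is then fed to `curl -Ls` by `xargs` and the tarball is unpacked straight into the working directory.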

# -- End of resin-wifi-connect section -- #

# -- Start of Display-O-Tron HAT section -- #

# Use apt-get to install dependencies
RUN apt-get update && apt-get install -yq --no-install-recommends \
    build-essential \
    python \
    python-pip \
    python-dev \
    python-dbus \
    python-setuptools \
    python3 \
    python3-pip \
    python3-dev \
    python-smbus \
    python-psutil \
    wireless-tools && \
    apt-get clean && rm -rf /var/lib/apt/lists/*

# Upgrade pip
RUN pip install -U pip

# Install dot3k library
RUN pip install dot3k

# -- End of Display-O-Tron HAT section -- #

# Copy everything into the container
COPY . ./

# Start application
CMD ["bash", "start.sh"]

On a slightly broader note, because of the nature of this project (i.e., WiFi connectivity back to resin.io only works once the project itself is working), I'm finding the debug cycle really long:

  • I build an image on my laptop
  • wait for it to upload to resin.io
  • download the (same?) image and use Etcher to burn the SD card
  • plug it into the RasPi Zero and power it up.
  • If there's an error anywhere, I end up not seeing an SSID called WiFi Connect, but I can't really connect to my Pi to get debug info either, so I basically have to start the whole process again.

Since I'm a total beginner at most of this, it seems like the process I described above isn't as short as it could be. I'd really appreciate it if you could point out any mistakes or improvements!

Thanks,

AKA

Do you have a chance to get an RPi 3 that you can attach to your router with an Ethernet cable? Not the very recent RPi 3 B+, but the model B, i.e. this one: https://www.raspberrypi.org/products/raspberry-pi-3-model-b/. The B+ came out just a few days ago and we do not support it yet. A wired RPi 3 is much easier for development than a Zero. You can stick with the Zero if you cannot get an RPi 3 B, though; the only difference is that you will have to reflash the SD card with Etcher each time something goes wrong with the wireless connection.

For a shorter development cycle, I think you can use our development images instead of the production ones: https://docs.resin.io/learn/develop/local-mode/
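As a rough sketch, a local-mode cycle with the resin CLI looks something like this (command names are from the CLI of that era and the device address is a placeholder; check `resin help` for the exact syntax on your version):

```
sudo resin local scan                            # discover development devices on the LAN
sudo resin local push 192.168.1.42 --source .    # build and run the app directly on the device
sudo resin local logs 192.168.1.42               # stream container logs, no reflash needed
```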

Another feature you may use to speed up the downloads is delta updates: https://docs.resin.io/learn/deploy/delta/

The default workflow is actually a bit different from what you describe:

  1. You download the host OS image from the resin dashboard once.
  2. You burn it to an SD card with Etcher and put it in the device.
  3. When you do git push resin master, this triggers our build system, which runs remotely. There is no actual upload of an image from your computer to our servers; only the modified source files are pushed with git.
  4. The build server creates Docker container images, which are different from the host OS image burnt with Etcher. The host OS downloads the container images automatically on each git push; that is the download progress bar you see on the dashboard.

So effectively you download the host OS image from our website only once. It stays the same no matter how many git pushes you do, since a git push affects only the container image.
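The git side of that workflow can be mimicked with a plain git remote. The sketch below uses a local bare repository standing in for the resin build server; the paths, remote name, and file contents are illustrative (the real remote URL comes from the resin dashboard):

```shell
set -e
# A bare repo plays the role of the remote build server
rm -rf /tmp/resin-demo && mkdir -p /tmp/resin-demo/builder.git /tmp/resin-demo/app
git init -q --bare /tmp/resin-demo/builder.git

# Set up a minimal project repository
cd /tmp/resin-demo/app
git init -q
git config user.email "you@example.com"
git config user.name "You"
git checkout -q -b master
echo 'FROM resin/%%RESIN_MACHINE_NAME%%-debian' > Dockerfile.template
git add Dockerfile.template
git commit -qm "Initial project"

# Pushing sends only the commits; the image build happens on the remote end
git remote add resin /tmp/resin-demo/builder.git
git push -q resin master
```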

BTW, I ordered a Display-O-Tron HAT a couple of days ago and hope to receive it next week. I have not fixed this example yet because I am still waiting for the hardware to arrive.

Please do not hesitate to ask if you have more questions.