整理台灣所有自費接種疫苗的資料 Collecting all the information about self-paid vaccines in Taiwan.
Make sure you have pipenv installed.
- `git clone` the repository.
- Run `pipenv install`.
- Run `yarn`.
- In one terminal, run `yarn backend`. This starts the Python Flask server.
- In another terminal, run `yarn frontend`. This starts Parcel, our JS bundler.
To make sure we're serving all of Taiwan, it's imperative that we translate the site into all the languages spoken here. I haven't settled on an i18n approach I'm happy with yet, but for now, to localize into a new language:
- Translate the blurb in `Content.jsx`. This is the main content visitors see when they arrive on the site.
- Translate the strings that live in the `Strings` folder.
- Translate the content for the top g0v banner.
To add support for a locale once all strings have been localized, override the `setLocale('en')` method call in `Content.jsx` and set it to whatever language code you used to localize.
Current priority languages:
- Tagalog
- Japanese
- Korean
Some principles this project operates under:
- Dependencies should be introduced only where necessary. The only non-dev JS dependency in the project so far is React. This keeps the project easy to maintain.
- Configuration is minimal. We're using all the defaults for ParcelJS, a zero-configuration JavaScript bundler.
This project uses:
- The Airbnb JavaScript style guide
- Flow for JavaScript type-checking
- Black for Python formatting
- Pyre for Python type-checking with PEP 484 type hints
This is an app in three parts:
- A web app written in React. The code for this lives in `frontend/`.
- A local scraper running on a machine in my living room. This is a Python script that lives in `backend/local_scraper.py`. It is executed every minute by a cronjob, and it uploads the results to a Redis server.
- A Flask server written in Python. This server code lives in `backend/app.py` and runs on a DigitalOcean Droplet in Singapore. The Droplet serves the React app, reading the latest information from the Redis server and sending the data down.
Note: Why separate the scraper from the Droplet? Originally, I wanted to run everything on the Droplet, but websites were blocking non-Taiwan IP addresses, so I was forced to scrape from inside Taiwan, which means a machine running on my family's network.
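The upload half of that pipeline might look roughly like the sketch below. The Redis key name and payload shape are assumptions for illustration, not the project's real schema; see `backend/local_scraper.py` for the actual code.

```python
import json
import time

def build_payload(results):
    """Build the JSON blob the scraper might upload.

    results: list of (hospital_id, availability_string) tuples.
    The field names here are illustrative, not the real schema.
    """
    return json.dumps({
        "updated_at": int(time.time()),
        "availability": {str(hid): status for hid, status in results},
    })

# Uploading would then be a single SET against the shared Redis server,
# e.g. with redis-py (not run here):
#   redis_client.set("vaccine_availability", build_payload(results))
```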
When you are developing locally, instead of reading from the Redis server, it makes sense to scrape the data directly. The `app.py` Flask server takes a `--scrape` flag, which, when set, scrapes locally instead of reading from the Redis server.
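Conceptually, the flag is just a branch between two data sources. Here is a minimal sketch; the helper names (`fetch_from_redis`, `scrape_now`) are hypothetical, and the real logic lives in `backend/app.py`.

```python
import argparse

def fetch_from_redis():
    # Production path: read the latest results the cronjob uploaded.
    return {"source": "redis"}

def scrape_now():
    # Local development path: run the scrapers directly.
    return {"source": "local"}

def get_availability(scrape_locally):
    return scrape_now() if scrape_locally else fetch_from_redis()

def make_parser():
    p = argparse.ArgumentParser()
    p.add_argument("--scrape", action="store_true")
    return p

# Simulate invoking `python app.py --scrape` during local development.
args = make_parser().parse_args(["--scrape"])
```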
- Open up `data/hospitals.csv`. Add your GitHub name as the owner next to that hospital.
- Note the ID for that hospital (e.g. 台大醫院 is 3).
- Write a scraper that makes a request to the webpage and returns the ID and the `AppointmentAvailability` as a tuple (see examples).
- Import your module in `app.py` and add your parser to the list of `PARSERS`.
- Run `yarn backend` and confirm that your code is correct.
- Make a Pull Request and tag @kevinjcliao to take a look.
See here for an example of a Pull Request.
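A new parser might look roughly like the following sketch. The `AppointmentAvailability` enum values and the exact return contract are assumptions based on the description above; check the existing parsers in the repository for the real shapes.

```python
from enum import Enum
from typing import Tuple

class AppointmentAvailability(Enum):
    # Hypothetical values; the project's real enum may differ.
    AVAILABLE = "Available"
    UNAVAILABLE = "Unavailable"
    NO_DATA = "No data"

def parse_example_hospital() -> Tuple[int, AppointmentAvailability]:
    # In a real parser you would fetch the hospital's booking page
    # (e.g. with requests) and inspect the HTML for open slots.
    page_has_slots = False  # placeholder for the actual check
    availability = (
        AppointmentAvailability.AVAILABLE
        if page_has_slots
        else AppointmentAvailability.UNAVAILABLE
    )
    return (3, availability)  # 3 is the ID for 台大醫院 in data/hospitals.csv
```

The module would then be imported in `app.py` and the function appended to `PARSERS` alongside the others.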
- After adding a new Python dependency, pipenv gets pretty unhappy. Run `pipenv lock --pre --clear` to fix it. I've aliased this to `yarn fixpipenv`.
We're chatting in the #vaccine channel of the g0v Slack. Come say hi! :)
g0v is Taiwan's polycentric civic tech community. We're a network of volunteers who build websites that serve the public good. Join us for our next Hackathon! You can find dates on the Jothon Team website.
If you were able to book a vaccine using this website, I'd love it if you'd consider throwing a couple bucks towards mutual aid or solidarity organizations in your area. Some ideas:
- Kevin Liao
- Kevin Liao and Yao Wei
- Bahasa: Dakota Pekerti and Andre Maure
- Mandarin: Kevin Liao