This code scrapes the data from NJ.com
on a daily basis (2 PM EST).
The data is parsed to extract the City, County, Cases of
COVID-19, as well as any reported deaths and recoveries.
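For flavor, the scrape-and-parse step might look something like the sketch below. The URL, table position, and column order are assumptions for illustration only; the real selectors live in the repo's .py/.ipynb.

```python
import pandas as pd
import requests

# Hypothetical URL -- the real NJ.com article link is in the repo
URL = "https://www.nj.com/coronavirus/..."

def scrape_nj_data(url=URL):
    """Fetch the article and parse City, County, Cases, Deaths, Recoveries."""
    html = requests.get(url, timeout=30).text
    # read_html returns a DataFrame for every <table> on the page;
    # assumption: the first table holds the city-level numbers
    df = pd.read_html(html)[0]
    df.columns = ["City", "County", "Cases", "Deaths", "Recoveries"]
    return df
```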
In my opinion, the data set is likely not as robust or accurate as would be ideal. For example, some counties have stopped
reporting city-level data since this project was created, while other counties have never
provided a breakdown.
The current day's data is below. Older files are in the Archive.
These files can be used as a constant reference to provide data to a Google Sheet with
=IMPORTDATA("https://athenedyne-covid-19.s3.amazonaws.com/Active/current-complete.csv").
There appears to be no equivalent Excel function.
The main dashboard has four views, accessible via the tabs:
- A heat map
- A county summary map
- A ZIP map where every ZIP in a shared town carries that town's full case count. E.g., Newark has 22 ZIP codes,
and each ZIP code shows the same number of cases. This better illustrates large cities as hot spots, but it overcounts
the total number of cases and hides other hot spots.
- A ZIP map where the ZIPs in a shared town split the case count evenly. E.g., Newark has 22 ZIP codes,
and each ZIP code shows Newark's total reported cases divided by 22.
This better illustrates smaller-city hot spots (Lakewood, North Bergen, Passaic, Woodbridge) without overcounting
the total number of cases.
The extracted data is then formatted and joined with a ZIP list using City and County to
handle cities that share a name (e.g., Franklin is a town in Sussex, Gloucester, Warren,
Hunterdon, and Somerset counties). The data is also aggregated by ZIP to deal with cities that share
ZIP Codes. For example, 08053 zones for Marlton (recommended) and Evesham (recognized).
It also zones for
- EVESBORO NJ
- EVESHAM TWP NJ
- KRESSON NJ
- MARLTON LAKES NJ
- NORTH MARLTON NJ
- PINE GROVE NJ
To explore this further, take a look at the
USPS
ZIP Code lookup tool.
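A minimal sketch of that two-key join, assuming NJzips.csv carries City, County, and Zip Code columns:

```python
import pandas as pd

scraped = scrape_nj_data()        # from the sketch above
zips = pd.read_csv("NJzips.csv")  # assumed columns: City, County, Zip Code

# Joining on both keys keeps the five Franklins apart; joining on City
# alone would smear one Franklin's counts across all five counties
merged = scraped.merge(zips, on=["City", "County"], how="left")

# Towns absent from NJzips.csv surface here; they feed the
# missing-ZIPs CSV described below
missing = merged[merged["Zip Code"].isna()]
```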
For cities that have more than one ZIP (Newark, Camden, Edison, etc.), it appears the
default behavior is to join the cases, deaths, and recoveries data to each of the ZIPs.
This is a gotcha for aggregation. The ZIP version of the output drops duplicate (City,
County) tuples, keeping the first, so values can then be safely summed to provide a total
per ZIP. Additionally, the complete CSVs count the number of ZIPs per (City, County) tuple
and provide Adjusted Cases by dividing Cases by Shared ZIPs.
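In pandas terms, the dedup-and-adjust step might look roughly like this sketch (continuing from the merged frame above; a sketch of the described behavior, not the repo's exact code):

```python
# After the join, a multi-ZIP city repeats its totals on every ZIP row.
# Keep only the first row per (City, County) so each city counts once...
deduped = merged.drop_duplicates(subset=["City", "County"], keep="first")

# ...then sums cannot double-count a city
per_zip = deduped.groupby("Zip Code", as_index=False)["Cases"].sum()

# The complete CSVs instead divide: Shared ZIPs counts the ZIPs per
# (City, County), and Adjusted Cases spreads Cases evenly across them
merged["Shared ZIPs"] = (
    merged.groupby(["City", "County"])["Zip Code"].transform("count")
)
merged["Adjusted Cases"] = merged["Cases"] / merged["Shared ZIPs"]
```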
The data is then exported to an
AWS S3 Bucket
as the variants below, sorted by folder to keep each type together and ordered by date
(a sketch of the naming follows the list).
The variants are:
- MM-DD-YYYY-complete.csv has County, City, Cases, Deaths, Recoveries, Zip Code, Shared
ZIPs, Adjusted Cases. These are in the Complete folder.
- MM-DD-YYYY-cases.csv has Zip Code, City, Cases. These are in the Cases folder.
- MM-DD-YYYY-zips.csv has Zip Code, Cases aggregate. These are in the ZIPs folder.
- MM-DD-YYYY-missing-ZIPs.csv has an empty Zip Code column, City, and County. This is
only written if any ZIPs are missing from the master NJzips.csv file. These are in the
MissingZIPs folder.
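The naming scheme and folder layout might be assembled like this (frame and column names assumed, continuing from the sketches above):

```python
from datetime import date

stamp = date.today().strftime("%m-%d-%Y")  # MM-DD-YYYY

variants = {
    f"Complete/{stamp}-complete.csv": merged,
    f"Cases/{stamp}-cases.csv": merged[["Zip Code", "City", "Cases"]],
    f"ZIPs/{stamp}-zips.csv": per_zip,
}
if not missing.empty:  # only written when NJzips.csv is missing a town
    variants[f"MissingZIPs/{stamp}-missing-ZIPs.csv"] = missing

# Lambda can only write to /tmp, so stage the CSVs there before upload
for key, frame in variants.items():
    frame.to_csv("/tmp/" + key.split("/")[-1], index=False)
```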
Additionally, the .py script creates copies of the
above variants prefixed with current-. This allows bookmarking the current file or using
the data for a visualization, an analysis, or another use. These are kept together in the Active folder.
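Uploading the dated files and refreshing the current- copies might look like this boto3 sketch (bucket name taken from the IMPORTDATA URL above; the repo's exact calls may differ):

```python
import boto3

s3 = boto3.client("s3")
BUCKET = "athenedyne-covid-19"

for key in variants:  # e.g. "Complete/04-25-2020-complete.csv"
    local = "/tmp/" + key.split("/")[-1]
    s3.upload_file(local, BUCKET, key)

    # Copy each dated file to a stable Active/current-* key so the
    # bookmarked IMPORTDATA URL never has to change
    suffix = key.split("-", 3)[-1]  # "complete.csv", "cases.csv", ...
    s3.copy_object(
        Bucket=BUCKET,
        CopySource={"Bucket": BUCKET, "Key": key},
        Key=f"Active/current-{suffix}",
    )
```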
Using Google Sheets and the import data function
(=IMPORTDATA("https://athenedyne-covid-19.s3.amazonaws.com/Active/current-complete.csv")), one
can connect to Tableau to create the interactive viz above, which stays up to date.
The Lambda script is set up on AWS Lambda to run at 2 PM EST daily, updating the
S3 Bucket. It was originally scheduled for noon, but the article is not always ready
by then.
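For reference, CloudWatch Events / EventBridge schedules are written in UTC, so a 2 PM EST run corresponds to 19:00 UTC. The rule and handler below are assumptions, since the actual configuration isn't shown here:

```python
# EventBridge schedule expression (UTC): 2 PM EST = 19:00 UTC.
# Note: cron rules ignore daylight saving, so during EDT this fires
# at 3 PM local unless the rule is moved to 18:00 UTC.
SCHEDULE = "cron(0 19 * * ? *)"

def lambda_handler(event, context):
    """Default AWS Lambda entry point (handler name is an assumption)."""
    df = scrape_nj_data()  # scrape + parse, as sketched above
    # ...then join, aggregate, write the CSV variants, and upload to S3
```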
The ZIPs were collected into NJzips.csv, which is only somewhat complete: it may be missing
ZIPs for towns that share them, but it does contain ZIPs for every town on the page as of 04/25/2020.
The basis of the index.html file (the one you're reading now) is an answer on the
AWS forum by J. Patel from
5/3/2011. The page uses JS to generate HTML that lists all of the files in the S3 bucket;
I modified it to skip index.html.
This repo can be used locally
(the .ipynb is recommended) or as an AWS
Lambda (when zipped with dependencies). When used as a Lambda, it's helpful to know that
AWS runs a Linux variant, so the pandas and numpy libraries need to be built for a
Linux system. I run macOS, so I needed to source my libraries from PyPI (or via pip, as shown after the list):
- pandas - pandas-1.0.3-cp37-cp37m-manylinux1_x86_64.whl
- numpy - numpy-1.18.2-cp37-cp37m-manylinux1_x86_64.whl
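As an alternative to downloading the wheels by hand, pip can fetch Linux wheels directly; a hedged example with the same version pins (the target directory name is my choice):

```
pip download pandas==1.0.3 numpy==1.18.2 \
    --platform manylinux1_x86_64 --python-version 37 \
    --only-binary=:all: -d package/
```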
This is laid out really well in
AWS Lambda with Pandas and NumPy
by
Ruslan Korniichuk.