Personal Project

Tesla Notify

Serverless Application to provide notification alerts on changes in Tesla’s used and new vehicle inventory.

Tesla Notify image

Role:

Full-Stack Developer

Technology:

Django, Python, Svelte, Sapper, Django Rest Framework, TailwindCSS, Node.js, Webpack, Docker, Google Cloud Platform

Initially I was trying to find a Tesla Stealth Model 3 Performance vehicle from Tesla’s online inventory. This is a rare variant that you can’t order directly from Tesla; however, they pop up in the inventory now and again. As there is no way of knowing from Tesla when cars are added or updated, I wrote a basic Python script that would monitor Tesla’s inventory and notify me of any additions.
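The original script amounted to a simple polling loop. This sketch shows the idea; the endpoint URL and the shape of the response are assumptions, since Tesla’s inventory API is undocumented:

```python
import json
import time
from urllib.parse import urlencode
from urllib.request import urlopen

# Hypothetical endpoint - the real Tesla inventory API is undocumented
# and its query/response format differs, so treat this as a sketch.
INVENTORY_URL = "https://www.tesla.com/inventory/api/v1/inventory-results"


def find_new_listings(results, seen_vins):
    """Return listings whose VIN has not been seen before, and record them."""
    new = [car for car in results if car["VIN"] not in seen_vins]
    seen_vins.update(car["VIN"] for car in new)
    return new


def poll(query, interval=300):
    """Fetch the inventory every `interval` seconds and report new cars."""
    seen = set()
    while True:
        with urlopen(f"{INVENTORY_URL}?{urlencode(query)}") as resp:
            results = json.load(resp).get("results", [])
        for car in find_new_listings(results, seen):
            print("New listing:", car["VIN"])  # in practice: send a notification
        time.sleep(interval)
```

Keeping the “have I seen this car?” check in its own function made it easy to later swap the in-memory set for a database lookup.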

I wanted to learn Sapper and work more with GCP and Cloud Run, so I decided to make an SSR application that would run locally using Docker and be deployed on serverless infrastructure for production.

How it works

A user can search available cars from the Tesla inventory using various search criteria and filters. They are able to create a notification alert based on the search query; if not logged in, they just need to supply an email address and click the activation link that is sent to them.

The system is passwordless: to log in, the user enters their email address and clicks an activation link that is sent to them. The activation link contains a single-use verification token that, once validated via the API, returns an access token stored in a session on the Node.js server; this access token allows the user’s client to perform authenticated requests to the API.
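The token exchange can be sketched like this. It is an in-memory illustration only; the real app persists tokens through Django, and all names and the expiry window here are assumptions:

```python
import secrets
import time

# Illustrative single-use token exchange; the real implementation stores
# tokens via the Django ORM. The TTL value is an assumption.
VERIFICATION_TTL = 15 * 60  # activation links expire after 15 minutes

_pending = {}  # verification token -> (email, issued_at)


def issue_verification_token(email):
    """Create the single-use token embedded in the emailed activation link."""
    token = secrets.token_urlsafe(32)
    _pending[token] = (email, time.time())
    return token


def redeem_verification_token(token):
    """Validate the token exactly once; return a session payload or None."""
    record = _pending.pop(token, None)  # pop makes the token single-use
    if record is None:
        return None
    email, issued_at = record
    if time.time() - issued_at > VERIFICATION_TTL:
        return None
    # The access token would be stored server-side in the Node.js session.
    return {"email": email, "access_token": secrets.token_urlsafe(32)}
```

Popping the token on first use is what makes replaying an old activation link harmless.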

The backend Worker instance retrieves car data from Tesla and updates the car database with changes, additions and removals. When complete, it calls another process that matches any updated or new cars against active notification alerts; any matches result in an email being sent to the user containing a grouped summary of cars.
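The matching step boils down to comparing each changed car against each alert’s stored criteria and grouping the hits per subscriber. This is an illustrative sketch; in the real app the matching is done with database queries, and the field names are assumptions:

```python
from collections import defaultdict

# Illustrative matcher: an "alert" is stored search criteria plus an email.
# Field names ("model", "trim", ...) are assumptions for the example.


def car_matches(car, criteria):
    """A car matches when every stored criterion equals the car's field."""
    return all(car.get(field) == value for field, value in criteria.items())


def group_matches_by_email(changed_cars, alerts):
    """Group new/updated cars per subscriber so each gets one summary email."""
    summaries = defaultdict(list)
    for alert in alerts:
        for car in changed_cars:
            if car_matches(car, alert["criteria"]):
                summaries[alert["email"]].append(car)
    return dict(summaries)
```

Grouping before sending is what lets one run of the worker produce a single digest email per user rather than one email per car.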

The Setup

Tesla Notify Stack

This diagram gives an idea of how the project is structured: there is a separation of concerns between the frontend and backend, and everything runs in the cloud, decoupled, on serverless architecture.

Backend

The project is using these services on the Google Cloud Platform for the backend:

  • Cloud Run
  • Cloud Tasks
  • Cloud SQL
  • Cloud Scheduler
  • IAM access
  • Container Registry

The two main application services, the API and the Worker, are Docker container images that run on Cloud Run, deployed via Container Registry and configured with environment variables. Instances on Cloud Run are stateless; they can be public (unauthenticated) or require authentication for access. It’s best that a container instance only runs a single service on one port.

Create and Deliver Tasks

The API service handles all requests from the frontend client and server; requests should be dealt with quickly, and background work should be handled asynchronously. So for any background task, such as sending an access token, the API creates a task in a Cloud Tasks queue, which in turn gets delivered to the Worker service for processing. Normally I would use Celery with RabbitMQ to handle asynchronous tasks; however, within the Google Cloud Platform, Cloud Tasks works well as an alternative and is probably less overhead.
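Enqueueing looks roughly like this. The helper builds the HTTP task body that Cloud Tasks delivers to the worker; the project ID, queue name, worker URL and service account below are placeholders, and the actual enqueue call (shown in the comment) uses the `google-cloud-tasks` client:

```python
import json

# Builds the task body for a Cloud Tasks HTTP task. All identifiers
# (URLs, service account, queue) are placeholders for illustration.


def build_http_task(worker_url, payload, service_account_email):
    """Build the body of an HTTP task targeting the Worker service."""
    return {
        "http_request": {
            "http_method": "POST",
            "url": worker_url,
            "headers": {"Content-Type": "application/json"},
            "body": json.dumps(payload).encode(),
            # The OIDC token lets Cloud Tasks call a private Cloud Run service.
            "oidc_token": {"service_account_email": service_account_email},
        }
    }


# With the google-cloud-tasks client, enqueueing would look roughly like:
#
#   from google.cloud import tasks_v2
#   client = tasks_v2.CloudTasksClient()
#   parent = client.queue_path("my-project", "europe-west1", "worker-queue")
#   client.create_task(parent=parent, task=build_http_task(...))
```

The queue then handles retries and rate limiting, which is most of what Celery would otherwise be doing here.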

As well as handling tasks from the API, the Worker instance can run a job to retrieve cars from Tesla and update the database. This is triggered by Cloud Scheduler, which can be set up like a cron job. Like Cloud Tasks, Scheduler can call secure HTTP targets using an OIDC token.

The great thing about Cloud Run is that it can automatically scale up to multiple instances and can also scale to zero. So if no requests come in for some time, no instances will be running and you won’t pay for usage. Both the API and Worker services connect to a Cloud SQL relational database running PostgreSQL; service accounts automatically handle secure access between Cloud Run and Cloud SQL.

One issue I noticed with Cloud Run is that Python multithreading doesn’t seem to work at the moment. Multithreading is very useful for speeding up web scraping, where you are making lots of HTTP requests; it’s something I had running locally that would fail when run on a Cloud Run instance.
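For context, this is the kind of fan-out pattern that worked locally: spreading many HTTP fetches across a thread pool. `fetch` is a stand-in for the real scraping call.

```python
from concurrent.futures import ThreadPoolExecutor

# Thread-pool fan-out for I/O-bound scraping. fetch() is a placeholder
# for an actual HTTP request to the inventory endpoint.


def fetch(url):
    """Stand-in for an HTTP request; would return the parsed response."""
    return f"scraped {url}"


def fetch_all(urls, max_workers=8):
    """Fetch many URLs concurrently; map() preserves the input order."""
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        return list(pool.map(fetch, urls))
```

Because the work is I/O-bound, the GIL isn’t the bottleneck locally; the failures only appeared inside the Cloud Run environment.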

Frontend

One of the main reasons for this project was to learn about Sapper. It is an application framework similar to Next.js, but for the Svelte UI compiler. It works with Node.js as Express-compatible middleware and does some really clever things like SSR (server-side rendering), code splitting and routing based on file structure. Writing components in Svelte is very efficient, and making applications in Sapper can be quite rapid.

As well as Svelte and Sapper, the frontend uses the TailwindCSS framework. Everything is built with Webpack, which purges any unused CSS, resulting in a very lightweight application.

Performance

Fireworks from Lighthouse Performance Audit

Using SSR with Sapper makes it a really good choice if SEO is important. Running Google’s Lighthouse audit, I managed to score 100 for SEO and each of the other metrics; in theory this is really good for search indexing and general user experience.

Conclusion

Although this was quite a simple project, I was able to make use of some interesting technologies without ever having to set up a server.

Google’s Cloud Run is excellent for running microservices. Here I’ve used it for serving an API and running processes like web scraping, and for dealing with background tasks that need to run outside of a normal request. It’s very easy to set up and deploy a containerised app to Cloud Run, and it works well with other cloud services like Cloud SQL, Cloud Tasks, Cloud Scheduler and IAM.

Sapper is great for making snappy SSR sites with good SEO performance, and it works very well on Zeit’s Now serverless platform, making it a cost-effective solution.
