Writing using Air Gestures

GitHub

https://github.com/infinitusposs/Writing-using-Air-Gestures

Introduction

This project combines object detection and image classification on an iOS edge device. Through the device’s camera, it detects the user’s fingertip and tracks its movement, rendering the fingertip’s path as a black line on screen. Image classification is then used to recognize the digit the user has written.
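The detect → track → classify loop described above can be sketched in Python. This is only an illustration of the data flow, not the on-device code (which runs TF Lite on iOS); `detect_fingertip` and `classify_digit` are hypothetical stand-ins for the real detection and classification models.

```python
# Sketch of the per-frame pipeline: detect the fingertip in each frame,
# rasterize the tracked path onto a canvas, then classify the drawn digit.
# detect_fingertip and classify_digit are hypothetical model stubs.

def draw_stroke(canvas, points):
    """Rasterize consecutive fingertip points onto the canvas as a line."""
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        steps = max(abs(x1 - x0), abs(y1 - y0), 1)
        for i in range(steps + 1):
            x = x0 + (x1 - x0) * i // steps
            y = y0 + (y1 - y0) * i // steps
            canvas[y][x] = 1  # 1 = "ink"
    return canvas

def run_pipeline(frames, detect_fingertip, classify_digit, size=28):
    """Track the fingertip across frames, draw its path, classify the result."""
    canvas = [[0] * size for _ in range(size)]
    points = [detect_fingertip(frame) for frame in frames]
    draw_stroke(canvas, points)
    return classify_digit(canvas)
```

In the real app the detection model returns a fingertip bounding box per camera frame and the classifier is a digit model run on the rendered canvas; here both are passed in as callables so the flow can be exercised with stubs.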

Instructions: hold up one finger to draw, two fingers to pause drawing, and three fingers to clear the screen.
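The finger-count gestures above amount to a small dispatch over the detected finger count. A minimal sketch, assuming the detector has already produced a count (`handle_gesture` is a hypothetical helper, not the app's actual code):

```python
def handle_gesture(finger_count, drawing, stroke):
    """Map a detected finger count to the app's drawing state.

    Returns (drawing, stroke):
      1 finger  -> drawing on, keep the stroke
      2 fingers -> pause drawing, keep the stroke
      3 fingers -> stop drawing and clear the stroke
      anything else -> no change
    """
    if finger_count == 1:
        return True, stroke
    if finger_count == 2:
        return False, stroke
    if finger_count == 3:
        return False, []
    return drawing, stroke
```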


Video demo

Demo

Report

The report contains all the details about the project.

Colab Notebook

Google Colab

References

CPPN: http://blog.otoro.net/2016/04/01/generating-large-images-from-latent-vectors/
Google Edge TF Lite iOS tutorial: https://cloud.google.com/vision/automl/object-detection/docs/tflite-ios-tutorial
Kaggle dataset “Fingers”: https://www.kaggle.com/koryakinp/fingers (not used in the final version)