Sitting Posture Identifier

Information zone
Please feel free to contact me here if there are any queries.
Data Handling
No uploaded images are stored on the server, whether permanently or temporarily. Images are processed and returned as a base64-encoded string.
Users' activities on this site are not tracked or monitored.
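For illustration only (this is a sketch, not the app's actual source), the snippet below shows how an upload can be decoded and re-encoded entirely in memory with OpenCV and the standard base64 module, which is all the base64 response described above requires.

```python
import base64

import cv2
import numpy as np


def encode_upload_in_memory(upload_bytes: bytes) -> str:
    """Decode an uploaded photo and return it as a base64-encoded JPEG string,
    keeping it only as an in-memory array (nothing is written to disk)."""
    # Bytes from the upload -> NumPy image array, entirely in memory
    image = cv2.imdecode(np.frombuffer(upload_bytes, dtype=np.uint8), cv2.IMREAD_COLOR)

    # Image array -> JPEG bytes -> base64 string for the JSON response
    ok, jpeg = cv2.imencode(".jpg", image)
    if not ok:
        raise ValueError("failed to encode image")
    return base64.b64encode(jpeg.tobytes()).decode("utf-8")
```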
Algorithm
The posture-estimation algorithm uses YOLOv3 as the underlying model. Transfer learning was performed on top of YOLOv3 with a custom dataset, and the resulting model was optimized for this use case (posture detection).
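The training pipeline itself is not shown here. As a rough sketch of the detection side, assuming the fine-tuned weights are exported in Darknet format (posture.cfg, posture.weights, and the label list are hypothetical names), a single YOLOv3 forward pass with OpenCV's DNN module could look like this:

```python
import cv2
import numpy as np

# Hypothetical file names for the fine-tuned model and its label list
CFG, WEIGHTS = "posture.cfg", "posture.weights"
LABELS = ["neck_good", "neck_bad", "backbone_good", "backbone_bad",
          "buttocks_good", "buttocks_bad"]

net = cv2.dnn.readNetFromDarknet(CFG, WEIGHTS)


def detect(image: np.ndarray, conf_threshold: float = 0.5):
    """Run one YOLOv3 forward pass and return (label, confidence, box) tuples."""
    h, w = image.shape[:2]
    blob = cv2.dnn.blobFromImage(image, 1 / 255.0, (416, 416), swapRB=True, crop=False)
    net.setInput(blob)
    outputs = net.forward(net.getUnconnectedOutLayersNames())

    detections = []
    for output in outputs:
        for row in output:  # [cx, cy, bw, bh, objectness, class scores...]
            scores = row[5:]
            class_id = int(np.argmax(scores))
            confidence = float(scores[class_id])
            if confidence < conf_threshold:
                continue
            # Box coordinates are normalised; scale back to the image size
            cx, cy, bw, bh = row[:4] * np.array([w, h, w, h])
            box = (int(cx - bw / 2), int(cy - bh / 2), int(bw), int(bh))
            detections.append((LABELS[class_id], confidence, box))
    return detections
```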
Backend Server
The backend was developed in Python 3 and packaged with Docker. Hosting it on Google Cloud Run as a Docker container improved performance noticeably compared to hosting on a Linux VM (from 3-5 s down to about 0.7 s per detection on average).
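The server code is not reproduced here. As a minimal sketch of the Cloud Run side, the container simply needs to listen on the port Cloud Run provides through the PORT environment variable; Flask and the single health-check route below are assumptions for illustration, not necessarily the project's actual framework choice.

```python
import os

from flask import Flask

app = Flask(__name__)


@app.route("/healthz")
def healthz():
    # Simple liveness check so the container can be probed after deployment
    return "ok", 200


if __name__ == "__main__":
    # Cloud Run tells the container which port to listen on via the PORT env var
    app.run(host="0.0.0.0", port=int(os.environ.get("PORT", 8080)))
```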
What do the results mean?
The web app will return one or more of the posture results listed below.
neck_good / neck_bad: neck_good indicates the subject's neck is straight and looking forward, whereas neck_bad usually occurs when the subject is looking down or tilting the head.
backbone_good / backbone_bad: backbone_good indicates the subject is sitting straight, whereas backbone_bad indicates the subject is slouching or leaning sideways.
buttocks_good / buttocks_bad: buttocks_good indicates the subject is sitting straight, whereas buttocks_bad indicates the subject's buttocks are sliding forward and are no longer perpendicular to the backbone.
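As a small, purely illustrative example (the label list below is an assumption, not a real response), a client could summarise the returned labels like this:

```python
# Hypothetical list of labels taken from the JSON response
results = ["neck_bad", "backbone_good", "buttocks_good"]

issues = [label for label in results if label.endswith("_bad")]
if issues:
    print("Posture issues detected:", ", ".join(issues))
else:
    print("All detected regions indicate good posture.")
```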
Process Flow
As stated in the Data Handling section, no information is kept on the server. The process flow below should help you understand why, and a short code sketch of the backend part of the flow follows the list.

Upload a photo in the web app
Web app sends the uploaded photo to the backend server
Backend processes the uploaded photo into an image array and applies one or more image enhancements to it
The image array is sent to the prediction engine
The prediction engine returns the results together with the image array, now annotated with bounding boxes
The image array is converted to base64, combined with the results into a JSON response, and returned to the web app
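To make the backend part of this flow concrete, here is a minimal sketch assuming OpenCV for the image handling; detect_posture, the contrast-enhancement step, and the JSON field names are hypothetical stand-ins rather than the project's actual code.

```python
import base64

import cv2
import numpy as np


def detect_posture(image: np.ndarray):
    """Hypothetical stand-in for the prediction engine, which in this project
    is the fine-tuned YOLOv3 model; returns (label, confidence, box) tuples."""
    return []


def process_upload(upload_bytes: bytes) -> dict:
    """Mirror the flow above for one uploaded photo, entirely in memory."""
    # Uploaded bytes -> image array, plus a simple contrast enhancement (an assumed choice)
    image = cv2.imdecode(np.frombuffer(upload_bytes, dtype=np.uint8), cv2.IMREAD_COLOR)
    image = cv2.convertScaleAbs(image, alpha=1.2, beta=10)

    # Send the array to the prediction engine and draw the returned bounding boxes
    detections = detect_posture(image)
    for label, confidence, (x, y, w, h) in detections:
        cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 2)
        cv2.putText(image, f"{label} {confidence:.2f}", (x, y - 5),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 255, 0), 1)

    # Convert the annotated array to base64 and join it with the results as a JSON payload
    ok, jpeg = cv2.imencode(".jpg", image)
    return {
        "results": [label for label, _, _ in detections],
        "image": base64.b64encode(jpeg.tobytes()).decode("utf-8"),
    }
```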