Devlog: Gesturine

Previously I was working on a side project, a gesture-controlled game. It got pretty boring because I wasn't really interested in building a game at the moment, just the gesture control system for playing games. Even back then I had it in the back of my head that I should build a desktop app instead, one that could act as gesture controls for any game. That is still my focus: building the desktop app by integrating all the required modules and tweaking things here and there.
day 1
technical setup
I think I will start by narrowing down the scope and discussing the parts of the project with some AI tools.
update: So I went through a bunch of articles on how to vibe-code a project, ended up discussing my requirements with Claude, and generated a plan.md from that conversation. This will be the baseline used by all my agents and AI tools going forward. For now I will be sticking with Claude, plus Copilot with Claude as the model.
plan: I think I will be reusing part of the setup from my previous project, TypeFast, since the requirements are a bit similar and I used Electron for the desktop app there as well. I remember there were some issues with certain boilerplates and their compatibility with the key-binding libraries I was using, probably because those libraries were outdated.
plan: I am thinking of using Zustand for state management (partly for learning purposes), even though it's not strictly required given the project's current simple requirements.
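To get a feel for it, here is a minimal sketch of the kind of store I have in mind. The state shape (gesture-to-keybinding mappings) and the names are my guesses at this stage; the real code would use Zustand's own `create`/`createStore`, but its vanilla `(set, get) => state` pattern is small enough to hand-roll for illustration:

```typescript
// gesture name -> key binding, e.g. "Thumb_Up" -> "Space" (assumed shape)
type GestureMap = Record<string, string>;

interface BindingsState {
  bindings: GestureMap;
  setBinding: (gesture: string, key: string) => void;
  removeBinding: (gesture: string) => void;
}

// Tiny hand-rolled stand-in for Zustand's vanilla store API.
function createStore<T>(
  init: (set: (partial: Partial<T>) => void, get: () => T) => T
) {
  let state!: T;
  // Merge a partial update into the current state, Zustand-style.
  const set = (partial: Partial<T>) => {
    state = { ...state, ...partial };
  };
  const get = () => state;
  state = init(set, get);
  return { getState: get };
}

// Hypothetical bindings store for the app.
const bindingsStore = createStore<BindingsState>((set, get) => ({
  bindings: { Thumb_Up: "Space" },
  setBinding: (gesture, key) =>
    set({ bindings: { ...get().bindings, [gesture]: key } }),
  removeBinding: (gesture) => {
    const { [gesture]: _removed, ...rest } = get().bindings;
    set({ bindings: rest });
  },
}));
```

The nice part is that actions live inside the state, so the settings screen and the keybinding service can both read from one place.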
thought: It's been a long time since I worked on my PC; I generally use my laptop on the go. My Node.js was too outdated for https://electron-vite.org/ , so I just updated to Node 20.19.4.
I have settled on MediaPipe for realtime gesture detection over TensorFlow.js.
day 2
I had a lot of stuff going on yesterday, so I couldn't continue past the initial setup.
update: Got a MediaPipe service for gesture detection via Claude. Integrated it and going to test the flow now.
status: I don't currently have a cam on the PC, so I'm setting up my old phone as a cam via DroidCam. I don't feel my laptop is good enough for this task (old beat-up ThinkPad); maybe I will try continuing on the laptop once I reach a certain stage, by phase 7 maybe?
thought: AI-assisted coding is addictive. There are so many bugs, and if you are on free tools then debugging is hell for you, but the code generation is really good: my task-focused prompts with specific details produced some really good code. I haven't put much structure around the project; I should have completed the tasks from phase 1 properly so I'd have a well-structured project to work with. Anyway, for now the pace is good.
status: Gestures are getting recognized; now building a settings screen to configure the gesture-to-keybinding mappings. I haven't thought it through and have no proper design, so I am letting Claude lead me. One bad thing happened though: my free Claude credits for the day are over :/ The Claude-for-Copilot credits are over too, so I switched to GPT-4.1. I think a push for small local LLMs is actually needed for cheapskates like me; I should explore that realm next and see what the big updates in that area are.
status: The basic UX of the settings screen is created.
day 3
- status: I have set up the state management store and the keyboard binding service. Now I'm going back to what should have been done initially in phase 1, lol: setting up a debugger for my Electron app.
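For the keyboard binding service, the core idea is translating a stream of per-frame gesture labels into key press/release events: press the mapped key when a gesture appears, release it when the gesture changes or disappears. Here's a sketch of that logic with names of my own choosing; the actual key injection into the OS (whatever native module or Electron-side mechanism ends up doing it) is out of scope and would consume the returned events:

```typescript
type KeyEvent = { type: "down" | "up"; key: string };

// Hypothetical binder: tracks which key is currently held and emits the
// transitions needed when the recognized gesture changes.
function createKeyBinder(bindings: Record<string, string>) {
  let activeKey: string | null = null;

  // Call once per recognized frame with the current gesture label (or null).
  function update(gesture: string | null): KeyEvent[] {
    const key = gesture ? bindings[gesture] ?? null : null;
    if (key === activeKey) return []; // same gesture held, nothing to do
    const out: KeyEvent[] = [];
    if (activeKey) out.push({ type: "up", key: activeKey }); // release old key
    if (key) out.push({ type: "down", key }); // press new key
    activeKey = key;
    return out;
  }

  return { update };
}
```

Keeping it as a pure "events out" function should also make it easy to debug, which is exactly what I'm setting up for next.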
- Started debugging and fixing bugs in the current codebase and found some really stupid issues. The requirement was vague; the AI did the job, but halfway through it seemed to forget what it was doing and handled the state differently, so I think if I give it a clearer path of execution it will do better. Nonetheless, for a first draft of the application it's quite good, so for now I am debugging and fixing any potential issues, then I'll proceed. For the UX, I'm currently not sure what I will use. I saw daisyUI, but I'm not sure I need a full-fledged framework; Tailwind is installed, so maybe I will take a look at some shadcn/ui components and templates and use whatever is needed. For now the AI-generated UI is pretty good.
- Cleaning up the UI and making the system usable. The major functional tasks from phase 2 and phase 3 are cleared now, so I will be moving on to the non-functional requirements of these phases.
- I have updated the README and added an MIT license, lol.
- Fixes are still underway. The first draft will be ready soon; then I will work on completing all the pending points of phases 2 and 3.