PoseNet Sketchbook is a collection of open source, interactive web experiments designed to allude to the artistic possibilities of using PoseNet (running on tensorflow.js) to create a relationship between movement and machine.
These prototypes exemplify a wide range of interactions that PoseNet can enable. Together, they make up a raw starter kit that is built to be used by dancers and developers alike.
This is not a library or code repository that intends to evolve. Instead, it is an archive of where Body, Movement, Language first began. You can use this collection of code as a starting point to create your own wacky or wild or just plain useful PoseNet experiments.
Read more about the journey from PoseNet Sketchbook to Body, Movement, Language in the blog post here.
First, clone or download this repository. For more information on how to do this, check out this article.
Next, make sure you are in the project folder. All of the commands below should be run from your terminal on your machine:
```
cd posenet-sketches
```
Install dependencies:
```
yarn
```
To watch files for changes, and launch a dev server:
```
yarn watch
```
The server should be running at localhost:1234.
All sketches use PoseDetection.js, a wrapper class I created to handle the PoseNet data.
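For context, loading and querying PoseNet with tensorflow.js typically looks like the sketch below; PoseDetection.js handles this kind of call for each sketch. This is a minimal sketch assuming the @tensorflow-models/posenet 2.x-style API, and the function shown is an illustration, not the wrapper's actual interface:

```js
import * as posenet from '@tensorflow-models/posenet';

// Load the model once at startup.
const netPromise = posenet.load();

// Estimate a single pose for the current video frame.
// `video` is assumed to be an HTMLVideoElement showing the webcam feed.
async function estimatePose(video) {
  const net = await netPromise;
  // flipHorizontal mirrors coordinates to match the mirrored webcam view.
  const pose = await net.estimateSinglePose(video, { flipHorizontal: true });
  // pose.keypoints is an array of { part, score, position: { x, y } } objects.
  return pose;
}
```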
Each individual sketch is hosted in the `sketches/` folder and contains:
- index.html: The page that loads and runs the sketch.
- style.css: Styling for the sketch.
- assets/: The thumbnail, gif, and any additional assets used in the sketch.
- js/: The source files.
  - main.js: Sets up the camera, loads the video, and initializes PoseDetection and the sketch.
  - sketch.js: This is where the ~ magic ~ happens (a skeleton follows this list). Some functions to note:
    - setup: Initializes the canvas width and height.
    - initSketchGui: Sets up the GUI elements that will affect the sketch and adds them to the GUI structure.
    - draw: Loops at roughly 60 fps; renders and updates elements on the canvas with each call.
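As a rough illustration, a sketch's lifecycle might look like the skeleton below. This is a minimal sketch: the canvas element id, the dat.GUI-style `gui` object, and the confidence threshold are assumptions for the example, not the repository's exact code.

```js
// Hypothetical sketch.js skeleton; names and the GUI library are assumptions.
let canvas, ctx;
const settings = { pointRadius: 5 };

// setup: initialize the canvas width and height and grab a drawing context.
function setup() {
  canvas = document.getElementById('sketch-canvas');
  canvas.width = window.innerWidth;
  canvas.height = window.innerHeight;
  ctx = canvas.getContext('2d');
}

// initSketchGui: add sketch-specific controls to the shared GUI structure.
function initSketchGui(gui) {
  gui.add(settings, 'pointRadius', 1, 20);
}

// draw: called once per animation frame (~60 fps) with the latest pose data.
function draw(pose) {
  ctx.clearRect(0, 0, canvas.width, canvas.height);
  for (const keypoint of pose.keypoints) {
    if (keypoint.score < 0.2) continue; // skip low-confidence keypoints
    ctx.beginPath();
    ctx.arc(keypoint.position.x, keypoint.position.y, settings.pointRadius, 0, 2 * Math.PI);
    ctx.fill();
  }
}
```

In this structure, main.js would call setup and initSketchGui once at startup, then invoke draw with fresh pose data on every frame.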
Each sketch explores a different question:

- How does PoseNet interpret your pose?
- How might we allow past motion to linger?
- How might movement history affect text on screen?
- How might movement be translated and abstracted into new forms?
- How might a variety of elements collage to recreate a figure on screen?
- How might spoken words manifest on screen in relation to the body?
- How might body position be used as a controller? (see the sketch after this list)
- How might body position surface and highlight content?
- How might body position manipulate an audio experience?
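One way body position can act as a controller, sketched under the assumption of PoseNet-style keypoint data (the function name and threshold here are hypothetical, not code from this repository):

```js
// Hypothetical example: map a keypoint's horizontal position to a control value.
// `pose` is a PoseNet pose object; `canvasWidth` is the drawing surface width.
function controlValueFromPose(pose, canvasWidth) {
  const nose = pose.keypoints.find((k) => k.part === 'nose');
  if (!nose || nose.score < 0.2) return null; // no confident reading this frame
  // Normalize the nose's x position into the range [0, 1].
  return Math.min(Math.max(nose.position.x / canvasWidth, 0), 1);
}

// For example, drive an audio element's volume each frame:
// audioEl.volume = controlValueFromPose(pose, canvas.width) ?? audioEl.volume;
```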
Built by Maya Man at the Google Creative Lab.
This is not an official Google product.
We encourage open sourcing projects as a way of learning from each other. Please respect our and other creators’ rights, including copyright and trademark rights when present, when sharing these works and creating derivative works. If you want more info on Google's policy, you can find it here. To contribute to the project, please refer to the contributing document in this repository.