_____/\/\______/\/\____/\/\__/\/\/\/\/\/\____/\/\/\/\______/\/\__/\/\____
___/\/\/\/\____/\/\____/\/\______/\/\______/\/\____/\/\__/\/\/\/\/\/\___
_/\/\____/\/\__/\/\____/\/\______/\/\______/\/\____/\/\__/\/___/\/______
_/\/\/\/\/\/\__/\/\____/\/\______/\/\______/\/\____/\/\_________________
_/\/\____/\/\____/\/\/\/\________/\/\________/\/\/\/\___________________
________________________________________________________________________
_/\/\______/\/\______/\/\______/\/\____/\/\__/\/\/\/\/\/\____/\/\__/\/\_
_/\/\__/\__/\/\____/\/\/\/\____/\/\____/\/\__/\____________/\/\/\/\/\/\_
_/\/\/\/\/\/\/\__/\/\____/\/\__/\/\____/\/\__/\/\/\/\/\____/\/___/\/____
_/\/\/\__/\/\/\__/\/\/\/\/\/\____/\/\/\/\____/\/\_______________________
_/\/\______/\/\__/\/\____/\/\______/\/\______/\/\/\/\/\/\_______________
________________________________________________________________________
Final - Auto~Wave~Machine~ - 12/01/2021
🎸
In the first class of this semester, I introduced myself with a photo of me with a bunch of guitars. And yes, playing music and modifying equipment has always been a passion of mine.
In this final project, I want to combine what I have learned with music. I know many existing projects use AI to compose music, but I tried to make mine more interactive, or more playful.
🧩
I landed on the question of what co-creation could look like in a music performance. A good precedent here is the work of Sougwen Chung.
She collaborates with a robotic arm to create sketches while the arm learns her gestures.
It reminds me of the postures in musical performance: the body sways slightly when playing jazz, while metal has its "headbanging".
🤔
First, I used Teachable Machine to train a simple classification model. I then wanted this model to switch between different tone settings on my effects pedalboard.
I planned to run a TensorFlow Lite version of the model on a Raspberry Pi, and to use servo motors to physically press the footswitches on my pedals. However, I noticed that this classification model was too heavy for the Lite version to support.
So I came up with a new solution: why not make it more dynamic?
🎛
There are a lot of knobs on each effect pedal, which is what makes them so fascinating! By adjusting these values, we can create infinite combinations of tones.
But during a performance, we only use the footswitches to toggle each pedal on and off. My hands are playing! They're too busy to adjust the knobs!
Thus, I began to design a pedal that would adjust the tone in real-time according to my performance posture.
5️⃣
The figure above shows the technical map of this device. I will add a module to a tremolo effect pedal (tremolo is a kind of swaying sound). Three servo motors will turn the knobs automatically.
Since I am running the PoseNet detection model in p5.js, I use the serial port to communicate between the p5 sketch and the microcontroller. As my body tilts from left to right, the values of Depth, Shape, and Rate change according to the detected angle.
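The mapping itself can be sketched as plain functions. The names here (`shoulderTilt`, `tiltToServo`, `knobFrame`) and the tilt/servo ranges are illustrative; in the real sketch the keypoints come from PoseNet in the browser and the frame goes out over p5.serialcontrol:

```javascript
// Tilt angle of the torso, estimated from the two shoulder keypoints.
// Each keypoint is an {x, y} pixel position from the pose detector.
function shoulderTilt(leftShoulder, rightShoulder) {
  const dy = rightShoulder.y - leftShoulder.y;
  const dx = rightShoulder.x - leftShoulder.x;
  return (Math.atan2(dy, dx) * 180) / Math.PI; // degrees, 0 = level shoulders
}

// Map a tilt in [minTilt, maxTilt] degrees onto a servo range [0, 180],
// clamping so an extreme pose can't over-rotate a knob.
function tiltToServo(tiltDeg, minTilt = -30, maxTilt = 30) {
  const t = Math.min(Math.max(tiltDeg, minTilt), maxTilt);
  return Math.round(((t - minTilt) / (maxTilt - minTilt)) * 180);
}

// One servo angle per knob: Depth, Shape, Rate.
// In the real sketch this string would be sent to the microcontroller,
// e.g. serial.write(knobFrame(tilt) + "\n").
function knobFrame(tiltDeg) {
  const depth = tiltToServo(tiltDeg);
  const shape = tiltToServo(-tiltDeg); // Shape sweeps opposite to Depth
  const rate = tiltToServo(tiltDeg, -45, 45); // Rate uses a wider sweep
  return `${depth},${shape},${rate}`;
}
```

The microcontroller then only has to split the comma-separated frame and write each angle to its servo.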
🕹
For the appearance, I designed an acrylic case to cover the original pedal. The control servos are also fixed to the case. I used an ASCII generator to create an ASCII-art-style logo and engraved it on the case.
Let's Jam!
Credits:
Teachable Machine
PoseNet
p5.serialcontrol
ASCII Generator
Midterm - Haeckel's New Findings - 10/06/2021
🖋
Through my earlier discussions with T, we found we shared an interest in creating something related to biology.
We first wanted to generate some biological microstructures to create an underlying, fundamental new species, mixing in the habitat features of our input species as well.
Then, after we found a set of image data of Haeckel's illustrations, we changed our plan.
🕵🏼♂️
Ernst Haeckel is one of the scientists and artists I admire most. His spirit was a great encouragement to me during my undergraduate studies on design morphology.
A fuller introduction can be found in the presentation.
📕
We took 52 illustrations from Kunstformen der Natur (Art Forms in Nature, Ernst Haeckel, 1924) as inputs for a GAN.
By training in Playform's Freeform, we obtained the following results (a partial selection is shown).
A combination of different textures can be seen in the generated results, forming some chaotic organisms.
The symmetrical features of the input images are no longer present, which is likely related to the image library Playform's model was pre-trained on.
🔧
For the physical part, I imported one of the generated images into Rhinoceros and used a grayscale-to-point-cloud method to generate some 3D textures.
Through 3D printing, we can get a fossil-like texture.
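The grayscale-to-point-cloud idea is simple enough to sketch outside of Rhino. This is an illustration of the principle only, not the actual Rhino workflow; the function name and parameters are made up for the example. Every pixel becomes a point whose height comes from its brightness:

```javascript
// pixels: 2D array of grayscale values in [0, 255], row-major
// maxHeight: z value assigned to pure white (255)
// Returns a flat list of [x, y, z] points, one per pixel.
function grayscaleToPointCloud(pixels, maxHeight = 10) {
  const points = [];
  for (let y = 0; y < pixels.length; y++) {
    for (let x = 0; x < pixels[y].length; x++) {
      const z = (pixels[y][x] / 255) * maxHeight; // brightness -> height
      points.push([x, y, z]);
    }
  }
  return points;
}
```

Meshing that point cloud (in Rhino, or any mesh tool) is what produces the relief-like 3D texture.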
A little note about teamwork: my teammate T originally took charge of the printing, and T told me the print was already in progress. However, T did not show up in class, and I have not been able to reach T since. So the physical part is currently in a kind of Schrödinger's state: I don't know whether it has been made or not. I plan to print the model again myself on Sunday, October 17th, and I will keep trying to get in touch with T.
Finally got it!!! - 10/27/2021
Physical Pattern - The Illusion - 09/29/2021
🔍
During the observation last Wednesday night, I was attracted by two patterns: the first is the pattern produced by the superposition of light; the second is the bipinnate leaf pattern of a plant.
About Pinnation (Special Thanks to Yibo)
🎨
I tried to combine these two elements, following the growth logic of bipinnation to create an interactive frame.
I transferred the illusion into the alpha values of the pattern. The final result, built in p5.js, is shown above.
🔧
For the physical part, now that I have finally finished the laser-cutting orientation, I want to cut a series of transparent acrylic sheets and assemble them according to the logic of bipinnation.
That would make it possible to create this dynamic illusion in the real world.
Physical Gan - PoopsGan - 09/21/2021
Check the Result on Playform Here!!!
🐦
Project background: "Tsingcrow University" is a project I have been running since 2019 on the Tsinghua University campus.
The project's name comes from the flocks of crows that inhabit the campus; their excrement causes a lot of environmental problems for the residents.
This project intends to explore the subject-object relationship between humans and crows: to whom does this space belong? How can the conflict between humans and nature be reconciled?
📷
During this project, I collected a large number of pictures of crows' droppings (sort of disgusting). These images are a perfect fit for a GAN dataset.
Because I have no way to collect more images (obviously, I am in New York!), I hope the algorithm can continue the work I left unfinished.
📡
I uploaded 32 images of poops to Playform. After 67 steps of training, several results showed up. Watching the process, the GAN shapes what it perceives as "color" and "texture" step by step.
The final results also retain the characteristics of the original photos.
⏺
From a semiotic point of view, the AI has done a good job of summarizing the signifieds of this series of images and generating these visual symbols very "accurately".
But in this process, did the signifiers of the images change? Does this process add a new meaning to the objects (crows’ poops)? Or does the synthesis itself reduce the meaning of the original image?
👀
For the materialized part, I would like to print these synthesized photos as stickers.
Then I will stick them in the original places where the poops were recorded. In this way, the "real" and the "virtual" exist in a mix.
But currently it looks like I won't be able to do it myself; I may need to delegate the task to one of my friends!
Physical Models - Chaotic Admissions Letter - 09/15/2021
Check the Admission Letter Here!!!
🎓
The 2021 application season was not a success for me. Only two of the seven programs I applied to accepted me,
one of which, of course, was Parsons Design and Technology! When I was asked to collect some texts that could be generated,
these letters came to mind. Interestingly, the rejection letters all express the same content but are very richly worded.
Maybe they just don't want the rejections to feel so heartless. Although it is indeed heartbreaking...
📮
I mixed all my rejection letters and admission letters together, trying to find some unexpected results.
Fortunately, I got accepted to my dream school! (Of course I won't tell you which one my dream school is; after all,
I am in the newest school in the world!)
To give the generated letter a more ceremonial feel, I typeset it in Adobe Illustrator and printed it.
Surprisingly, I found that those PDF documents were protected by encryption and I did not have permission to copy the text in them.
Although I don't know what the point of this is, I also encrypted my PDF!
👨🏼💻
In the process of using this pre-trained model, I found that Markov models still have some limitations.
To make the letter more realistic, I planned to generate the admission officers' names too.
However, this model is better at handling logically complete sentences; it does not simply mix individual words.
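To illustrate the limitation, here is a toy word-level Markov chain (not the pre-trained model I actually used; `buildChain` and `generate` are hypothetical helpers). It can only follow word-to-word transitions it has already seen in the corpus, so a short proper name either reappears verbatim or not at all; it never blends into a new name:

```javascript
// Build a word -> list-of-next-words table from a training text.
function buildChain(text) {
  const chain = {};
  const words = text.split(/\s+/).filter(Boolean);
  for (let i = 0; i < words.length - 1; i++) {
    (chain[words[i]] = chain[words[i]] || []).push(words[i + 1]);
  }
  return chain;
}

// Deterministic walk: always take the first recorded successor,
// so the example is reproducible without a random seed.
function generate(chain, start, maxWords = 10) {
  const out = [start];
  let current = start;
  while (out.length < maxWords && chain[current]) {
    current = chain[current][0];
    out.push(current);
  }
  return out.join(" ");
}
```

A real generator would sample the successor list at random instead of taking the first entry, but the word-recombination behavior is the same.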
👋🏼
Overall, I am happy that I was accepted by my dream school! See you, Parsons!