The project explores the combination of emergent patterns created through computer code and the potential of generating new imagery using StyleGAN, a variant of the Generative Adversarial Network (GAN). Intrigued by morphogenesis and deeply impressed by the “magic” of complex patterns emerging from a simple set of rules and formulas, I decided to explore the reaction-diffusion system. I also incorporated Face-API so that people can generate their own reaction-diffusion patterns by moving their heads in front of a camera, and each user can save their patterns at any time. Together, these saved patterns form a curated dataset of reaction-diffusion imagery created by users in real time. This dataset is then fed to the StyleGAN2-ADA model, which allows GANs to be trained on limited data with various augmentation options. By doing so, I have the opportunity to re-examine these emergent patterns through the lens of the machine and investigate how machine learning interprets these patterns and gives them new forms.
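To make the “simple rules, complex patterns” idea concrete, below is a minimal sketch of a reaction-diffusion simulation using the Gray-Scott model, a common choice for generating such patterns. The project's actual implementation and parameters are not specified here, so the grid size, seed placement, and the feed/kill constants in this sketch are illustrative assumptions, not the project's real values.

```python
import numpy as np

def laplacian(Z):
    # 5-point stencil with periodic (wrap-around) boundaries
    return (np.roll(Z, 1, 0) + np.roll(Z, -1, 0)
            + np.roll(Z, 1, 1) + np.roll(Z, -1, 1) - 4 * Z)

def gray_scott(n=100, steps=2000, Du=0.16, Dv=0.08, feed=0.035, kill=0.065):
    """Run the Gray-Scott model on an n x n grid and return the V field.

    Parameter values are illustrative; different feed/kill pairs yield
    spots, stripes, or maze-like patterns.
    """
    U = np.ones((n, n))   # chemical U starts saturated everywhere
    V = np.zeros((n, n))  # chemical V starts empty
    # Seed a small square of V in the center to kick off pattern growth
    c = n // 2
    U[c-5:c+5, c-5:c+5] = 0.25
    V[c-5:c+5, c-5:c+5] = 0.5
    for _ in range(steps):
        uvv = U * V * V  # reaction term: U + 2V -> 3V
        U += Du * laplacian(U) - uvv + feed * (1 - U)
        V += Dv * laplacian(V) + uvv - (feed + kill) * V
    return V
```

In a setting like the one described, each saved frame of the `V` field (rendered as a grayscale image) would become one sample in the dataset later handed to StyleGAN2-ADA for training.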