A cybernetically-dreamed music video for the song "Algorithmia" by BenBen.
Do neural networks listen to simulated rock bands? Algorithms have dictated much of what we see and hear for years already; only much more recently have they become able to produce semblances of it on their own. With the advent of adversarial learning, in which paired networks sharpen one another through near-endless competition, digital simulacra have grown ever more eerily convincing: songs extended through extrapolation, faces invented from nothing, and so many machine-generated avocado-armchair designs. Especially over the last year, the tools for fully AI-created images (based only on generalized knowledge drawn from data sets of millions of captioned pictures) have rapidly evolved, diversified, and improved, allowing more and more generative artists to explore the form. But with algorithms already warping artistic production through the feedback loops of streaming and social media, what may come of vertically integrating algorithmic distribution with its own ability to create? Or, in the deeper exploration of machine-generated art (whose interest comes as much from its mystifying errors and omissions as from its successes), will the essential significance of the guiding human hand only become more obvious?