Machine Learning for Creativity and Design

NIPS 2017 Workshop, Long Beach, California, USA

Friday, December 8th, 8:00–18:30

Image credit: Density estimation using Real NVP

See the workshop website for accepted art submissions, music submissions, and paper demos!

Introduction

In the last year, generative machine learning and machine creativity have received considerable attention outside the research community. At the same time, there have been significant advances in generative models for media creation and for design. This one-day workshop explores several issues in the domain of generative models for creativity and design. We will look at algorithms for the generation and creation of new media and new designs, engaging both researchers building the next generation of generative models (GANs, RL, etc.) and those who approach creativity from a more information-theoretic perspective (compression, entropy, etc.). We will investigate the social and cultural impact of these new models, engaging researchers from the HCI/UX communities. We will also hear from some of the artists and musicians who are adopting machine learning approaches such as deep learning and reinforcement learning as part of their artistic process. We will leave ample time for discussing both the important technical challenges of generative models for creativity and design and the philosophical and cultural issues that surround this area of research.

The goal of this workshop is to bring together researchers and creative practitioners interested in advancing art and music generation to present new work, foster collaborations, and build networks.

Keynote Speakers

Jürgen Schmidhuber, Director & Professor at The Swiss AI Lab IDSIA

Ian Goodfellow, Staff Research Scientist, Google Brain

Rebecca Fiebrink, Senior Lecturer, Goldsmiths University of London

Ahmed Elgammal, Director of the Art & Artificial Intelligence Lab, Rutgers University

Emily Denton, PhD student, Courant Institute at New York University

Important Dates

3 November 2017: Submission date for papers and art

10 November 2017: Acceptance notification for papers and art submissions

28 November 2017: Deadline for final copy of accepted papers

4–9 December 2017: NIPS Conference

8 December 2017: Workshop

How to Participate

We invite participation in the form of papers and/or artwork.

To Submit a Paper

We invite participants to submit 2-page papers in the NIPS camera-ready format (with author names visible) by email to nips2017creativity@gmail.com

In the subject line of your email, please put:

NIPS Workshop: [Paper title]

Topics may include (but are not limited to):

In your submission, you may also indicate whether you would like to present a demo of your work during the workshop (if applicable).

Papers will be reviewed by committee members, and authors of accepted papers will present at the workshop in the form of a short talk, panel, and/or demo. At least one author of each accepted paper must register for and attend the workshop. Accepted papers will appear on the workshop website.

References and any supplementary materials do not count toward the 2-page limit; however, reading the supplementary materials is at the reviewers' discretion.

To Submit Artwork

We welcome submissions of artwork created using machine learning (autonomously or with humans). We invite art in any medium, including but not limited to sound and music, image, video, dance, text, physical objects, and food. We will be able to accommodate work submitted in one of the following formats:

As part of your submission, you will also be asked for a short text description of your work and a description of how machine learning was used in its creation.

Art submissions will be reviewed by committee members.

We will host an online gallery of accepted art submissions on the workshop website. While we will do our best to show a number of art pieces at the workshop itself, we will most likely not have access to adequate equipment and space to support a substantial exhibit. We may invite creators of accepted artwork to participate in the form of a short talk, panel, and/or demo.

Artists submitting work are encouraged, though not required, to attend in person.

Contact

If you have any questions, please contact us at nips2017creativity@gmail.com

Workshop website: https://nips2017creativity.github.io

Schedule

Time      Event
8:30 AM   Welcome and Introduction
8:45 AM   Invited Talk: Jürgen Schmidhuber
9:15 AM   Invited Talk: Emily Denton
9:45 AM   Invited Talk: Rebecca Fiebrink
10:15 AM  GANosaic: Mosaic Creation with Generative Texture Manifolds (Nikolay Jetchev, Urs Bergmann, Calvin Seward)
10:20 AM  TopoSketch: Drawing in Latent Space (Ian Loh, Tom White)
10:25 AM  Input parameterization for DeepDream (Alexander Mordvintsev, Chris Olah)
10:30 AM  Art / Coffee Break
11:00 AM  Invited Talk: Ian Goodfellow
11:30 AM  Improvised Comedy as a Turing Test (Kory Mathewson, Piotr Mirowski)
12:00 PM  Lunch
1:00 PM   Invited Talk: Ahmed Elgammal
1:30 PM   Hierarchical Variational Autoencoders for Music (Adam Roberts, Jesse Engel)
2:00 PM   Lexical Preferences in an Automated Story Writing System (Melissa Roemmele, Andrew S. Gordon)
2:30 PM   ObamaNet: Photo-realistic Lip-sync from Text (Rithesh Kumar, Jose Sotelo, Kundan Kumar, Alexandre de Brébisson, Yoshua Bengio)
3:00 PM   Art / Coffee Break
3:30 PM   Towards the High-quality Anime Characters Generation with Generative Adversarial Networks (Yanghua Jin, Jiakai Zhang, Minjun Li, Yingtao Tian, Huachun Zhu)
3:35 PM   Crowd Sourcing Clothes Design Directed by Adversarial Neural Networks (Hiroyuki Osone, Natsumi Kato, Daitetsu Sato, Naoya Muramatsu, Yoichi Ochiai)
3:40 PM   Paper Cubes: Evolving 3D characters in Augmented Reality using Recurrent Neural Networks (Anna Fuste, Judith Amores)
3:45 PM   Open Discussion
4:15 PM   Poster Session
5:00 PM   End of Workshop

Accepted Papers

  1. GANosaic: Mosaic Creation with Generative Texture Manifolds
    • Nikolay Jetchev, Urs Bergmann, Calvin Seward
  2. TopoSketch: Drawing in Latent Space
    • Ian Loh, Tom White
  3. Input parameterization for DeepDream
    • Alexander Mordvintsev, Chris Olah
  4. Improvised Comedy as a Turing Test
    • Kory Mathewson, Piotr Mirowski
  5. Hierarchical Variational Autoencoders for Music
    • Adam Roberts, Jesse Engel
  6. Lexical Preferences in an Automated Story Writing System
    • Melissa Roemmele, Andrew S. Gordon
  7. ObamaNet: Photo-realistic Lip-sync from Text
    • Rithesh Kumar, Jose Sotelo, Kundan Kumar, Alexandre de Brébisson, Yoshua Bengio
  8. Towards the High-quality Anime Characters Generation with Generative Adversarial Networks
    • Yanghua Jin, Jiakai Zhang, Minjun Li, Yingtao Tian, Huachun Zhu
  9. Crowd Sourcing Clothes Design Directed by Adversarial Neural Networks
    • Hiroyuki Osone, Natsumi Kato, Daitetsu Sato, Naoya Muramatsu, Yoichi Ochiai
  10. Paper Cubes: Evolving 3D characters in Augmented Reality using Recurrent Neural Networks
    • Anna Fuste, Judith Amores
  11. AI for Fragrance Design
    • Richard Goodwin, Joana Maria, Payel Das, Raya Horesh, Richard Segal, Jing Fu, Christian Harris
  12. ASCII Art Synthesis with Convolutional Networks
    • Osamu Akiyama
  13. Combinatorial Meta Search
    • Matthew Guzdial, Mark O. Riedl
  14. Consistent Comic Colorization with Pixel-wise Background Classification
    • Sungmin Kang, Jaegul Choo, Jaehyuk Chang
  15. Compositional Pattern Producing GAN
    • Luke Metz, Ishaan Gulrajani
  16. Deep Interactive Evolutionary Computation
    • Philip Bontrager, Wending Lin, Sebastian Risi, Julian Togelius
  17. Deep Learning for Identifying Potential Conceptual Shifts for Co-creative Drawing
    • Pegah Karimi, Nicholas Davis, Kazjon Grace, Mary Lou Maher
  18. Disentangled Representations of Style and Content for Visual Art with Generative Adversarial Networks
    • Chris Donahue, Julian McAuley
  19. Repeating and Mistranslating: The Associations of GANs in an Art Context
    • Anna Ridler
  20. Generating Black Metal and Math Rock: Beyond Bach, Beethoven, and Beatles
    • Zack Zukowski, CJ Carr
  21. Generative Embedded Mapping Systems for Design
    • Tom White, Phoebe Zeller, Hannah Dockerty
  22. Imaginary Soundscape: Cross-Modal Approach to Generate Pseudo Sound Environments
    • Yuma Kajihara, Shoya Dozono, Nao Tokui
  23. Improvisational Storytelling Agents
    • Lara J. Martin, Prithviraj Ammanabrolu, Xinyu Wang, Shruti Singh, Brent Harrison, Murtaza Dhuliawala, Pradyumna Tambwekar, Animesh Mehta, Richa Arora, Nathan Dass, Chris Purdy, Mark O. Riedl
  24. Learning to Create Piano Performances
    • Sageev Oore, Ian Simon
  25. Neural Style Transfer for Audio Spectrograms
    • Prateek Verma, Julius O. Smith
  26. Neural Translation of Musical Style
    • Iman Malik, Carl Henrik Ek
  27. Sequential Line Search for Generative Adversarial Networks
    • Masahiro Kazama, Viviane Takahashi
  28. SocialML: Machine Learning for Social Media Video Creators
    • Tomasz Trzcinski, Adam Bielski, Pawel Cyrta, Matthew Zak
  29. SOMNIA: Self-Organizing Maps as Neural Interactive Art
    • Byron V. Galbraith
  30. The Emotional GAN: Priming Adversarial Generation of Art with Emotion
    • David Alvarez-Melis, Judith Amores
  31. Time Domain Neural Audio Style Transfer
    • Parag K. Mital
  32. Algorithmic Composition of Polyphonic Music with the WaveCRF
    • Umut Güçlü, Yagmur Güçlütürk, Luca Ambrogioni, Eric Maris, Rob van Lier, Marcel van Gerven

Organisers

Douglas Eck, Google Brain

David Ha, Google Brain

S. M. Ali Eslami, DeepMind

Sander Dieleman, DeepMind

Rebecca Fiebrink, Goldsmiths University of London

Luba Elliott, AI Curator