2.1 - Integrating knowledge into a model

I have looked into several directions, and the CNN & GAN combination is still the most interesting to me. Referring to the MidiNet paper, I want to see whether I can find certain restrictions on the generation model that steer its output towards what humans perceive as music they would want to hear. The so-called Rencon experience is an event where researchers gather to evaluate their results regarding music generation algorithms. The paper I read describes the different Rencon events, where mainly classical music is generated and evaluated. Their findings do not provide the gold standard for evaluating generative models that I had hoped for, so I'll have to look in another direction for that. Searching online, I found another paper describing a Turing test for generative music algorithms, also called Rencon. It might prove useful, but for now I'll let it rest.
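To make that idea of a "restriction" more concrete, here is a minimal sketch of what one could look like in code: a GAN generator loss with an added feature-matching term, in the spirit of what the MidiNet paper proposes to keep generated bars statistically close to real music. The TensorFlow setup, tensor shapes, and the `lam` weight are my own placeholder assumptions, not MidiNet's actual implementation:

```python
import tensorflow as tf

def generator_loss(d_out_fake, feats_real, feats_fake, lam=0.1):
    """GAN generator loss with a feature-matching restriction.

    d_out_fake: discriminator outputs on generated bars
    feats_real / feats_fake: intermediate discriminator activations
    lam: weight of the restriction (placeholder value, my own choice)
    """
    # Standard adversarial term: try to make the discriminator say "real".
    adv = tf.reduce_mean(tf.keras.losses.binary_crossentropy(
        tf.ones_like(d_out_fake), d_out_fake))
    # Feature matching keeps the statistics of generated music close to
    # those of real training data, one kind of restriction MidiNet uses
    # to trade off creativity against staying close to the data.
    fm = tf.reduce_mean(tf.square(
        tf.reduce_mean(feats_real, axis=0) - tf.reduce_mean(feats_fake, axis=0)))
    return adv + lam * fm

# Quick check with random placeholder tensors:
d_fake = tf.random.uniform((8, 1))
f_real = tf.random.normal((8, 256))
f_fake = tf.random.normal((8, 256))
print(generator_loss(d_fake, f_real, f_fake))
```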

Another approach was the human hearing model. I haven't really found any mathematics that describes an exact model of human hearing. I do have my knowledge of biology, which might help me build a model myself, but it would be preferable to find a model with a mathematical foundation from a published research paper.
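One piece of psychoacoustics that does come with published mathematics is the mel scale, which maps physical frequency to perceived pitch. It is nowhere near a full hearing model, but it could serve as a building block until I find something better. A minimal sketch using O'Shaughnessy's standard formula:

```python
import math

def hz_to_mel(f_hz: float) -> float:
    """O'Shaughnessy's mel-scale formula: perceived pitch vs. frequency."""
    return 2595.0 * math.log10(1.0 + f_hz / 700.0)

def mel_to_hz(m: float) -> float:
    """Inverse mapping, back from mel to Hz."""
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

# Doubling the frequency does not double the perceived pitch,
# which is exactly the kind of nonlinearity a hearing model captures.
print(hz_to_mel(440.0))  # concert A, roughly 550 mel
print(hz_to_mel(880.0))  # one octave up, roughly 918 mel (not 1100)
```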

Besides these things I started coding my CNN, for which I first have to get my data into memory and couple it to the layers I want to use in the CNN; a first sketch of that step is shown below. As the data mainly consists of instrument parts, it might be good to also look for other data that can train a composer network or teach the model to play in a certain style, and then combine both of these networks into a music-making instrumentalist.
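Here is a first, hedged sketch of that loading step, assuming the MIDI files are turned into piano-roll matrices with the pretty_midi library and fed into a small Keras CNN. The window length, filter sizes, and binarisation are placeholder choices of mine, not the final design:

```python
import numpy as np
import pretty_midi
import tensorflow as tf

def load_piano_roll(path: str, fs: int = 16) -> np.ndarray:
    """Load a MIDI file as a (128 pitches x time steps) piano-roll matrix."""
    midi = pretty_midi.PrettyMIDI(path)
    roll = midi.get_piano_roll(fs=fs)     # velocities per pitch and time step
    return (roll > 0).astype(np.float32)  # binarise: note on/off

# roll = load_piano_roll("example.mid")  # hypothetical file path

# Placeholder CNN over fixed-length piano-roll windows (128 pitches x
# 64 time steps x 1 channel); layer sizes are assumptions for illustration.
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(16, (12, 4), activation="relu",
                           input_shape=(128, 64, 1)),  # ~octave x beat filters
    tf.keras.layers.MaxPooling2D((2, 2)),
    tf.keras.layers.Conv2D(32, (3, 3), activation="relu"),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),
])
```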
