Writing Worship Music with AI – Recurrent Neural Networks

Over the past few weeks, I’ve been toying around with a type of Artificial Intelligence (AI) called a Recurrent Neural Network (RNN). Since they’ve already been used to generate cooking recipes, Bible verses, Obama speeches, Mozart-style music, image descriptions, and writing prompts, I thought it would be fun to see what kind of lyrics one would produce if I fed it some popular Christian music.

I’m not sure it could quite pass as contemporvent, but it seems like something you might see in a future Church Hunters episode, and I can guarantee that Challies won’t sing them :).

3 of My Favorite AI-Produced Worship Songs

I will My see in You Sound

When I am heart, I am free
Here’s my heart, You are life
Here’s the life, I am free
You are love is peak my heart, You are life

I am heart, You are life
You are love is love is true
Here’s the wait
I am free

You are life, I am heart, I am free
You are love is heart, You are life
You are love is love is love

When I Am

Here’s the life the reach strong to see
You are love is love is promise
You are life I am true

Here’s the life, You are life
You are life, I am heart, Lord
I am life, You are life

Here’s all the reaches the wait
You are life, I am free
You are life that is love

You are life, Lord
I am heart, You are life

With His Speak

You are love is life
With I am healed the life all the bled
You’re is love is way the life
I am all the dark my heart

I am bread on Your love is
We cry strong place the with merce
You are life is heart, Lord
And strong the life, With all the reach that is pore

Update: Markov Lyrics

After this initial post, I tried another text-generation library, markovify, which uses something called a Markov chain to produce text. It doesn’t handle the punctuation (line by line) as well as the RNN library I describe below, but the lyrics are a bit more sensible:
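The core idea behind a Markov chain is simple enough to sketch from scratch: record which words follow which, then walk those links at random. Here’s a minimal toy version of what markovify does, using a made-up three-line corpus (not the actual song lyrics I used):

```python
import random
from collections import defaultdict

def build_chain(lines):
    """Map each word to the list of words observed to follow it."""
    chain = defaultdict(list)
    for line in lines:
        words = line.split()
        for prev, nxt in zip(words, words[1:]):
            chain[prev].append(nxt)
    return chain

def generate(chain, start, length=8):
    """Walk the chain from a start word, picking each next word at random."""
    words = [start]
    for _ in range(length):
        followers = chain.get(words[-1])
        if not followers:
            break  # dead end: nothing was ever observed after this word
        words.append(random.choice(followers))
    return " ".join(words)

# Toy corpus (assumed for illustration only)
corpus = [
    "you are my hope",
    "you are my song",
    "my hope is in you",
]
chain = build_chain(corpus)
print(generate(chain, "you"))
```

Because duplicated transitions appear more often in the follower lists, common phrases are reproduced more often, which is why the Markov output above stitches together recognizable fragments of real lyrics.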

What this world needs is not a human right
To see if they were yours all along
You are hope for justice,
Stand firm in the sand, temporary wealth

Walls are falling down
Now the walls are falling down,
storms are closing in
And here I am born again,

Woah, Let the river flow.

How majestic is Your love is moving,
moving among us So we give You all our strength,
With all our sin
The people sing Hosanna Hosanna

in the shelter of the Lord, my God saved the day,
When my world caved to nothing
You came from God above
The Father’s only Son
Saved my soul My Life

Here I am…
King of the world…
I’m famous in my mind
take my time to set me free,

Salvation is here and it is finished.
You can shine on You I can’t live all alone.

How Were These Created?

I’m not really qualified to explain Recurrent Neural Networks (RNN) on a technical level (this is one of the best explanations), but the basic idea is that a programmer can feed an RNN some sequential data (images, music, text, etc.), and the RNN “reads” it all and stores the patterns it finds in the input. Based on what it has learned, it can start to guess what comes next. For example, you probably know the next word in “Praise the ____,” and, given enough input, an RNN can guess that word as well.

For this experiment, I fed the Torch-RNN library (using these two installation guides) about 500 songs from the top Christian artists according to Billboard, which included Hillsong, Chris Tomlin, Lauren Daigle, Casting Crowns, Mercy Me, Christy Nockels, For King and Country, and a few others.

It took a few hours to process, and then it started producing output like you see above.

Of course, the output doesn’t actually make any real sense. It sounds kind of worship-y, but it’s also complete gibberish. That’s because the RNN doesn’t truly understand or feel what it processed. It doesn’t even know the basic parts of English grammar like subjects or verbs, or musical concepts like a chorus or a bridge. It’s only intelligent in the sense that it grasps the patterns in the input and, based on probabilities, can generate something like what you fed it.

In this case, it seems to think that the words “life” and “heart” and “live” have a high probability of being in a song.

Limitations aside, the output is tremendously fun, and perhaps more interestingly, RNNs do a good job of demonstrating the difference between the ability to perform a complex mental task (you might call this intelligence) and higher level concepts like understanding, wisdom, or consciousness.

What about the Psalms?

I also wondered what kind of words the RNN would generate if I fed it another source of comparable Christian content, so I gave it the 150 Psalms from the English Standard Version (ESV).

Here are some of the passages it generated:

10 The LORD all the LORD with the man a saint of the LORD,
and the LORD in the LORD is for the peoples.
11 You have the will the seet the LORD,
and the said the hand of the LORD of the LORD
and string my state and the seard of the LORD,
and be the will the sound and the proins.
15 The LORD is his praise the come of the LORD,
and his sear the rebeath of the LORD will not be the waters,
and and came his stands the reap,
the LORD is a strong the earth of the LORD

and another with a few parameters adjusted (temperature from 0.2 to 0.4):

22 The LORD,
and the righteous have seek the LORD with the LORD,
and like of the LORD his praise the will sing and the will of the will from the pries of the will people.
13 He whise in the people in the LORD all the shall not of the LORD,
and the LORD,
and the for my righteous and of the will not the earth;
the LORD and do me who sea have not the will not of the earth of the fainter of the refut the land of the hand of the piens of the will be the righteouss of the LORD in the will not the will not and the revere the will be forther of the LORD with the ining to the word of the LORD,
and the will be are a same

Update: Markov Psalms

15 He sends out his word to Jacob,
his chosen one, stood in the morning. His holy mountain,
2 beautiful in elevation, is the nation whose God is a mere breath!
10 The LORD brings the counsel of the LORD, for the dead?
4 Have they no knowledge, all the ends of the LORD, our Maker!
May prayer be counted as incense before you,
our secret sins in the land; you do not get their fill.
78 Let the insolent oppress me.
14 Satisfy us in the way I should go,
for to you the fear of you,
and let no one to bury them.
4 Even though I walk in the heavens;
your faithfulness answer me, O God, do not get their fill.

Clearly, there are some differences in the patterns.

Remember, the RNN isn’t simply counting words and then spitting them back at the same frequency. Although it can appear that way, the RNN is actually using probability to generate the next most likely character. In this case, from the Psalms dataset, the RNN learned that “L” is most likely to be followed by “ORD,” while in the contemporary songs it’s more likely to be followed by “ife” or “ove.”
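You can see this character-level difference with a simple counting sketch. Using two toy stand-in strings (not the real training corpora), it tallies which character follows a given letter in each:

```python
from collections import Counter

def next_char_counts(text, ch):
    """Count which character follows each occurrence of `ch` in `text`."""
    return Counter(text[i + 1] for i in range(len(text) - 1) if text[i] == ch)

# Toy stand-ins for the two corpora (assumed for illustration only)
psalms = "the LORD is my shepherd the LORD is my light"
songs = "you are life you are love here is my life"

print(next_char_counts(psalms, "L"))  # "O" dominates after "L"
print(next_char_counts(songs, "l"))   # "i" and "o" after lowercase "l"
```

A trained RNN holds a far more nuanced version of these statistics (conditioned on much more context than one character), but this is the flavor of what it learned from each dataset.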

The Psalms also have longer lines. Statistically speaking, this is because the return character occurs less frequently in that dataset, so the RNN considers a line break less likely after each word.

I hope you have as much fun with this as I did.

“The righteous have seek the LORD with the LORD”

Indeed.
