Cool project, thanks for making it available. I pulled the code and the LJSpeech dataset, prepared the dataset, and began training with the default parameters, using the commands at the top of the README. After printing the line
INFO:tensorflow:Calculate initial statistics.
Python 3's memory usage grew to almost 30 GB. After the initial statistics were calculated, memory usage dropped back to about 1 or 2 GB, and then after
INFO:tensorflow:global_step/sec: 0
it rose steadily to 60 (sixty) GB, at which point the OS killed the process. Is this normal? The saved model checkpoint is only 1.2 GB.
I'm using macOS 10.13.4 (High Sierra), Python 3.6.5, TensorFlow 1.9.0 (CPU only), and librosa 0.6.1. I saw similar behavior on an Ubuntu 14 machine with the GPU build of TensorFlow, where I killed the program myself after it reached 32 GB.
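In case it helps reproduce or narrow this down, here is the kind of per-step memory probe I used to watch the growth. This is a generic diagnostic sketch, not part of the repo's code; the `train_step()` placeholder stands in for whatever work happens between `global_step/sec` log lines:

```python
import resource
import sys

def rss_gb():
    """Return this process's peak resident set size in GB.

    ru_maxrss is reported in kilobytes on Linux and in bytes on
    macOS, so normalize based on the platform.
    """
    peak = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
    divisor = 1024 ** 3 if sys.platform == "darwin" else 1024 ** 2
    return peak / divisor

# Example: print peak memory once per step to see where growth happens.
for step in range(3):
    # train_step() would go here (hypothetical placeholder)
    print(f"step {step}: peak RSS {rss_gb():.2f} GB")
```

On my machine the printed value climbs monotonically with the step count until the process is killed, which is why I suspect a per-step leak rather than one-time overhead.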