
train.py problem #227

Open
rabbia970 opened this issue Jan 6, 2022 · 2 comments
rabbia970 commented Jan 6, 2022

Hi,

Whenever I run the train.py file, with various parameters or paths, I get the error below. I am unable to understand the purpose of "train.txt". Please help.

Command line args:
{'--checkpoint': None,
'--checkpoint-dir': 'checkpoints',
'--checkpoint-postnet': None,
'--checkpoint-seq2seq': None,
'--data-root': './prepro',
'--help': False,
'--hparams': '',
'--load-embedding': None,
'--log-event-path': None,
'--preset': 'presets/deepvoice3_ljspeech.json',
'--reset-optimizer': False,
'--restore-parts': None,
'--speaker-id': None,
'--train-postnet-only': False,
'--train-seq2seq-only': False}
Training whole model
Training seq2seq model
[!] Windows Detected - IF THAllocator.c 0x05 error occurs SET num_workers to 1
Hyperparameters:
adam_beta1: 0.5
adam_beta2: 0.9
adam_eps: 1e-06
allow_clipping_in_normalization: True
amsgrad: False
batch_size: 16
binary_divergence_weight: 0.1
builder: deepvoice3
checkpoint_interval: 10000
clip_thresh: 0.1
converter_channels: 256
decoder_channels: 256
downsample_step: 4
dropout: 0.050000000000000044
embedding_weight_std: 0.1
encoder_channels: 512
eval_interval: 10000
fft_size: 1024
fmax: 7600
fmin: 125
force_monotonic_attention: True
freeze_embedding: False
frontend: en
guided_attention_sigma: 0.2
hop_size: 256
ignore_recognition_level: 2
initial_learning_rate: 0.0005
kernel_size: 3
key_position_rate: 1.385
key_projection: True
lr_schedule: noam_learning_rate_decay
lr_schedule_kwargs: {}
masked_loss_weight: 0.5
max_positions: 512
min_level_db: -100
min_text: 20
n_speakers: 1
name: deepvoice3
nepochs: 2000
num_mels: 80
num_workers: 2
outputs_per_step: 1
padding_idx: 0
pin_memory: True
power: 1.4
preemphasis: 0.97
priority_freq: 3000
priority_freq_weight: 0.0
process_only_htk_aligned: False
query_position_rate: 1.0
ref_level_db: 20
replace_pronunciation_prob: 0.5
rescaling: False
rescaling_max: 0.999
sample_rate: 22050
save_optimizer_state: True
speaker_embed_dim: 16
speaker_embedding_weight_std: 0.01
text_embed_dim: 256
trainable_positional_encodings: False
use_decoder_state_for_postnet_input: True
use_guided_attention: True
use_memory_mask: True
value_projection: True
weight_decay: 0.0
window_ahead: 3
window_backward: 1
Traceback (most recent call last):
  File "train.py", line 954, in <module>
    X = FileSourceDataset(TextDataSource(data_root, speaker_id))
  File "C:\ProgramData\Anaconda3\envs\tf-gpu\lib\site-packages\nnmnkwii\datasets\__init__.py", line 108, in __init__
    collected_files = self.file_data_source.collect_files()
  File "train.py", line 106, in collect_files
    with open(meta, "rb") as f:
FileNotFoundError: [Errno 2] No such file or directory: './prepro\train.txt'
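For context on what is failing here: the traceback shows that train.py's `TextDataSource.collect_files` opens `<data-root>/train.txt`, the metadata index that the preprocessing step is expected to have written (one line per utterance). A minimal sketch of that lookup, assuming a pipe-separated field layout (the exact fields are illustrative, not the repo's verbatim format):

```python
import os
import tempfile

def collect_text_files(data_root):
    """Mimic the failing step: read <data_root>/train.txt, the metadata
    index that preprocessing writes (field layout is illustrative)."""
    meta = os.path.join(data_root, "train.txt")
    if not os.path.exists(meta):
        # this is the situation in the issue: preprocessing never ran,
        # or --data-root points at the wrong directory
        raise FileNotFoundError(
            f"{meta} not found - run preprocess.py first and point "
            f"--data-root at its output directory")
    with open(meta, "rb") as f:
        lines = f.read().decode("utf-8").splitlines()
    # each non-empty line becomes a list of pipe-separated fields
    return [line.split("|") for line in lines if line]

# demo with a throwaway directory standing in for ./prepro
with tempfile.TemporaryDirectory() as d:
    with open(os.path.join(d, "train.txt"), "w", encoding="utf-8") as f:
        f.write("ljspeech-mel-00001.npy|200|hello world\n")
    print(collect_text_files(d))
```

So the error simply means `./prepro` does not contain the `train.txt` that preprocessing should have produced there.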

(tf-gpu) C:\Windows\System32\deepvoice3_pytorch>python train.py --preset=presets/deepvoice3_ljspeech.json --data-root=.\prepro
Command line args:
{'--checkpoint': None,
'--checkpoint-dir': 'checkpoints',
'--checkpoint-postnet': None,
'--checkpoint-seq2seq': None,
'--data-root': '.\prepro',
'--help': False,
'--hparams': '',
'--load-embedding': None,
'--log-event-path': None,
'--preset': 'presets/deepvoice3_ljspeech.json',
'--reset-optimizer': False,
'--restore-parts': None,
'--speaker-id': None,
'--train-postnet-only': False,
'--train-seq2seq-only': False}
[... same "Training whole model" banner, Windows warning, and hyperparameter dump as in the first run ...]
Traceback (most recent call last):
  File "train.py", line 954, in <module>
    X = FileSourceDataset(TextDataSource(data_root, speaker_id))
  File "C:\ProgramData\Anaconda3\envs\tf-gpu\lib\site-packages\nnmnkwii\datasets\__init__.py", line 108, in __init__
    collected_files = self.file_data_source.collect_files()
  File "train.py", line 106, in collect_files
    with open(meta, "rb") as f:
FileNotFoundError: [Errno 2] No such file or directory: '.\prepro\train.txt'

@wolfassi123

@rabbia970 did you manage to solve the issue?

@GoombaProgrammer

You need to run the preprocessing step first and then point --data-root at its output directory (the one that actually contains train.txt).
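Concretely, the usual deepvoice3_pytorch workflow is to run preprocess.py before train.py, so that the output directory contains train.txt. A sketch of that sequence, assuming the LJSpeech dataset and the script arguments documented in the repo's README (the dataset path is a placeholder; exact arguments may differ between versions):

```shell
# 1) build the preprocessed dataset (including train.txt) into ./prepro
python preprocess.py ljspeech ./LJSpeech-1.1 ./prepro \
    --preset=presets/deepvoice3_ljspeech.json

# 2) sanity-check that the metadata index now exists
test -f ./prepro/train.txt && echo "prepro ready"

# 3) train, pointing --data-root at the preprocess output directory
python train.py --preset=presets/deepvoice3_ljspeech.json --data-root=./prepro
```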
