
Speech Translation using Coqui.ai on Intel Arc GPU A770 takes 23 seconds compared to CPU (3 sec), why? #620

Closed
shailesh837 opened this issue May 7, 2024 · 8 comments
Labels: ARC GPU

@shailesh837

Describe the issue

I am trying to synthesize speech with TTS:

https://docs.coqui.ai/en/latest/

I have managed to run the TTS code below on XPU, but it takes 23 seconds, while the same run takes only 3 seconds on CPU.
Please can you check what the issue is on XPU?
```
(tts) spandey2@imu-nex-nuc13x2-arc770-dut:~/tts$ pip list | grep torch
intel-extension-for-pytorch  2.1.10+xpu
torch                        2.1.0a0+cxx11.abi
torchaudio                   2.1.0a0+cxx11.abi
torchvision                  0.16.0a0+cxx11.abi
```

```
(tts) spandey2@imu-nex-nuc13x2-arc770-dut:~/tts$ cat andrej_code_tts.py
```

```python
# In case of proxies, remove .intel.com from the no_proxy list:
import os

# IPEX install (run once; kept here for reference):
# import subprocess
# subprocess.run(["python", "-m", "pip", "install", "torch==2.1.0.post2", "torchvision==0.16.0.post2", "torchaudio==2.1.0.post2",
#                 "intel-extension-for-pytorch==2.1.30+xpu", "oneccl_bind_pt==2.1.300+xpu",
#                 "--extra-index-url", "https://pytorch-extension.intel.com/release-whl/stable/xpu/us/"])

# TTS dependency. Do it in a TERMINAL:
#   sudo apt install espeak-ng

# Let's also check that Python can see the device
import torch
import intel_extension_for_pytorch as ipex
print(torch.__version__)
print(ipex.__version__)
[print(f'[{i}]: {torch.xpu.get_device_properties(i)}') for i in range(torch.xpu.device_count())]

from TTS.utils.manage import ModelManager
from TTS.utils.synthesizer import Synthesizer
# from IPython.display import Audio
import numpy as np
import soundfile as sf

model_manager = ModelManager()
model_path, config_path, model_item = model_manager.download_model("tts_models/en/vctk/vits")
synthesizer = Synthesizer(model_path, config_path, use_cuda=False)

# Move the model to GPU and optimize it using IPEX
synthesizer.tts_model.to('xpu')
synthesizer.tts_model.eval()  # Set the model to evaluation mode for inference
synthesizer.tts_model = ipex.optimize(synthesizer.tts_model, dtype=torch.float32)

speaker_manager = synthesizer.tts_model.speaker_manager
speaker_names = list(speaker_manager.name_to_id.keys())
print("Available speaker names:", speaker_names)

speaker_name = "p229"  # Replace with the actual speaker name you want to use

text = "Your last lap time was 117.547 seconds. That's a bit slower than your best, but you're still doing well. Keep pushing, a really good lap is around 100 seconds. You've got this, let's keep improving."

# Run inference on the XPU model; autocast is disabled here (pure float32)
with torch.no_grad(), torch.xpu.amp.autocast(enabled=False):
    wavs = synthesizer.tts(text, speaker_name=speaker_name)

if isinstance(wavs, list):
    # Convert each NumPy array or scalar in the list to a PyTorch tensor
    tensor_list = [torch.tensor(wav, dtype=torch.float32).unsqueeze(0) if np.isscalar(wav)
                   else torch.tensor(wav, dtype=torch.float32) for wav in wavs]
    # Concatenate the tensor list into a single tensor
    wav_concatenated = torch.cat(tensor_list, dim=0)
else:
    # If 'wavs' is already a tensor, use it directly
    wav_concatenated = wavs

# Move the tensor to CPU and convert to a NumPy array
wav_concatenated = wav_concatenated.cpu().numpy()

# Save the output to a WAV file
output_path = "output_vctk_vits.wav"
sf.write(output_path, wav_concatenated, synthesizer.tts_config.audio['sample_rate'])
```

@feng-intel

feng-intel commented May 8, 2024

What's your TTS version?
Also, could you fix the formatting of the code in your post?
Thanks.

@feng-intel

feng-intel commented May 8, 2024

These are the times I measured, running `wavs = synthesizer.tts(text, speaker_name=speaker_name)` repeatedly several times.

```
 > Processing time: 3.792356014251709
 > Real-time factor: 0.23648600145432747

 > Processing time: 2.8825700283050537
 > Real-time factor: 0.18470066116133077

 > Processing time: 2.8578579425811768
 > Real-time factor: 0.17769741369426476

 > Processing time: 2.3886497020721436
 > Real-time factor: 0.153395054551173

 > Processing time: 2.447114944458008
 > Real-time factor: 0.1507433525313424
```

You can warm up by running it several times first.
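For reference, a minimal timing sketch of such a warm-up (not part of the original script; it reuses `synthesizer`, `text` and `speaker_name` from the code above, and assumes the first XPU runs are slower because of one-time kernel compilation and caching):

```python
import time

# Warm-up: run synthesis a few times so one-time XPU overheads
# (kernel compilation, caching) are not counted in the measurement.
for _ in range(3):
    with torch.no_grad():
        synthesizer.tts(text, speaker_name=speaker_name)

# Timed run after warm-up
start = time.time()
with torch.no_grad():
    wavs = synthesizer.tts(text, speaker_name=speaker_name)
print(f"Processing time after warm-up: {time.time() - start:.2f} s")
```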

Version:

```
intel-extension-for-pytorch     2.1.30+xpu
torch                           2.1.0.post2+cxx11.abi
torchaudio                      2.1.0.post2+cxx11.abi
torchvision                     0.16.0.post2+cxx11.abi
TTS                             0.22.0
oneapi                          2024.1
GPU:                            Intel(R) Arc(TM) A770 Graphics
Model name:                     13th Gen Intel(R) Core(TM) i7-13700K
```

@feng-intel added the ARC GPU label May 8, 2024
@feng-intel self-assigned this May 8, 2024
@shailesh837
Author

@feng-intel: Please can you share the code you used to run this, and how you set up the environment, so I can reproduce it from scratch?

@shailesh837
Author

shailesh837 commented May 8, 2024

```python
text = "Alright, listen up. Tyres are still a bit cold, but they're getting there. Keep the pace steady and focus on getting them up to temp. We need those pressures closer to 30 psi, so keep an eye on that. Once the tyres are ready, we'll be good to go. Now get out there and give it everything you've got"
```

```
['Alright, listen up.', "Tyres are still a bit cold, but they're getting there.", 'Keep the pace steady and focus on getting them up to temp.', 'We need those pressures closer to 30 psi, so keep an eye on that.', "Once the tyres are ready, we'll be good to go.", "Now get out there and give it everything you've got"]

Processing time: 32.67794108390808
Real-time factor: 1.6533624919693377
```

```
(tts) spandey2@imu-nex-nuc13x2-arc770-dut:~/tts$ pip list | grep torch
intel-extension-for-pytorch  2.1.10+xpu
torch                        2.1.0a0+cxx11.abi
torchaudio                   2.1.0a0+cxx11.abi
torchvision                  0.16.0a0+cxx11.abi
```

@feng-intel

You can follow this page to install the latest Intel Extension for PyTorch (a minimal install sketch follows the versions below):
https://intel.github.io/intel-extension-for-pytorch/xpu/latest/tutorials/installation.html --> Installation page

These are my versions:

```
oneapi                          2024.1
TTS                             0.22.0
intel-extension-for-pytorch     2.1.30+xpu
torch                           2.1.0.post2+cxx11.abi
torchaudio                      2.1.0.post2+cxx11.abi
torchvision                     0.16.0.post2+cxx11.abi
GPU:                            Intel(R) Arc(TM) A770 Graphics
Model name:                     13th Gen Intel(R) Core(TM) i7-13700K
```
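For a from-scratch setup, here is a minimal install sketch from Python, mirroring the commented-out `subprocess.run` install in the original script (package versions taken from the version list above; confirm the exact command on the installation page):

```python
import subprocess

# Install the 2.1.30+xpu stack (sketch; verify versions against the
# installation page before running)
subprocess.run([
    "python", "-m", "pip", "install",
    "torch==2.1.0.post2", "torchvision==0.16.0.post2", "torchaudio==2.1.0.post2",
    "intel-extension-for-pytorch==2.1.30+xpu", "oneccl_bind_pt==2.1.300+xpu",
    "--extra-index-url", "https://pytorch-extension.intel.com/release-whl/stable/xpu/us/",
], check=True)
```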

@shailesh837
Author

```
pip install torchaudio==2.1.0.post2+cxx11.abi --extra-index-url https://pytorch-extension.intel.com/release-whl/stable/xpu/us/
  Successfully uninstalled torchaudio-2.1.0a0+cxx11.abi
ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
tts 0.22.0 requires torch>=2.1, but you have torch 2.1.0a0+cxx11.abi which is incompatible.
tts 0.22.0 requires transformers>=4.33.0, but you have transformers 4.31.0 which is incompatible.
Successfully installed torchaudio-2.1.0.post2+cxx11.abi
```

@shailesh837
Author

```
(tts) spandey2@imu-nex-nuc13x2-arc770-dut:~/tts$ pip install torchaudio==2.1.0.post2+cxx11.abi --extra-index-url https://pytorch-extension.intel.com/release-whl/stable/xpu/us/
Looking in indexes: https://pypi.org/simple, https://pytorch-extension.intel.com/release-whl/stable/xpu/us/
Collecting torchaudio==2.1.0.post2+cxx11.abi
  Downloading https://intel-extension-for-pytorch.s3.amazonaws.com/ipex_stable/xpu/./torchaudio-2.1.0.post2%2Bcxx11.abi-cp311-cp311-linux_x86_64.whl (1.5 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 1.5/1.5 MB 1.7 MB/s eta 0:00:00
Requirement already satisfied: torch in /home/spandey2/miniconda3/envs/tts/lib/python3.11/site-packages (from torchaudio==2.1.0.post2+cxx11.abi) (2.1.0a0+cxx11.abi)
Requirement already satisfied: filelock in /home/spandey2/miniconda3/envs/tts/lib/python3.11/site-packages (from torch->torchaudio==2.1.0.post2+cxx11.abi) (3.14.0)
Requirement already satisfied: typing-extensions in /home/spandey2/miniconda3/envs/tts/lib/python3.11/site-packages (from torch->torchaudio==2.1.0.post2+cxx11.abi) (4.11.0)
Requirement already satisfied: sympy in /home/spandey2/miniconda3/envs/tts/lib/python3.11/site-packages (from torch->torchaudio==2.1.0.post2+cxx11.abi) (1.12)
Requirement already satisfied: networkx in /home/spandey2/miniconda3/envs/tts/lib/python3.11/site-packages (from torch->torchaudio==2.1.0.post2+cxx11.abi) (2.8.8)
Requirement already satisfied: jinja2 in /home/spandey2/miniconda3/envs/tts/lib/python3.11/site-packages (from torch->torchaudio==2.1.0.post2+cxx11.abi) (3.1.4)
Requirement already satisfied: fsspec in /home/spandey2/miniconda3/envs/tts/lib/python3.11/site-packages (from torch->torchaudio==2.1.0.post2+cxx11.abi) (2024.3.1)
Requirement already satisfied: MarkupSafe>=2.0 in /home/spandey2/miniconda3/envs/tts/lib/python3.11/site-packages (from jinja2->torch->torchaudio==2.1.0.post2+cxx11.abi) (2.1.5)
Requirement already satisfied: mpmath>=0.19 in /home/spandey2/miniconda3/envs/tts/lib/python3.11/site-packages (from sympy->torch->torchaudio==2.1.0.post2+cxx11.abi) (1.3.0)
Installing collected packages: torchaudio
  Attempting uninstall: torchaudio
    Found existing installation: torchaudio 2.1.0a0+cxx11.abi
    Uninstalling torchaudio-2.1.0a0+cxx11.abi:
      Successfully uninstalled torchaudio-2.1.0a0+cxx11.abi
ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
tts 0.22.0 requires torch>=2.1, but you have torch 2.1.0a0+cxx11.abi which is incompatible.
tts 0.22.0 requires transformers>=4.33.0, but you have transformers 4.31.0 which is incompatible.
Successfully installed torchaudio-2.1.0.post2+cxx11.abi

(tts) spandey2@imu-nex-nuc13x2-arc770-dut:~/tts$ pip install torchvision==2.1.0.post2+cxx11.abi --extra-index-url https://pytorch-extension.intel.com/release-whl/stable/xpu/us/
Looking in indexes: https://pypi.org/simple, https://pytorch-extension.intel.com/release-whl/stable/xpu/us/
ERROR: Ignored the following yanked versions: 0.1.6, 0.1.7, 0.1.8, 0.1.9, 0.2.0, 0.2.1, 0.2.2, 0.2.2.post2, 0.2.2.post3, 0.15.0
ERROR: Could not find a version that satisfies the requirement torchvision==2.1.0.post2+cxx11.abi (from versions: 0.15.1, 0.15.2a0+cxx11.abi, 0.15.2, 0.16.0a0+cxx11.abi, 0.16.0, 0.16.0.post0+cxx11.abi, 0.16.0.post2+cxx11.abi, 0.16.1, 0.16.2, 0.17.0, 0.17.1, 0.17.2, 0.18.0)
ERROR: No matching distribution found for torchvision==2.1.0.post2+cxx11.abi

(tts) spandey2@imu-nex-nuc13x2-arc770-dut:~/tts$ pip install torch==2.1.0.post2+cxx11.abi --extra-index-url https://pytorch-extension.intel.com/release-whl/stable/xpu/us/
Looking in indexes: https://pypi.org/simple, https://pytorch-extension.intel.com/release-whl/stable/xpu/us/
Collecting torch==2.1.0.post2+cxx11.abi
  Downloading https://intel-extension-for-pytorch.s3.amazonaws.com/ipex_stable/xpu/./torch-2.1.0.post2%2Bcxx11.abi-cp311-cp311-linux_x86_64.whl (191.2 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 191.2/191.2 MB 3.8 MB/s eta 0:00:00
Requirement already satisfied: filelock in /home/spandey2/miniconda3/envs/tts/lib/python3.11/site-packages (from torch==2.1.0.post2+cxx11.abi) (3.14.0)
Requirement already satisfied: typing-extensions in /home/spandey2/miniconda3/envs/tts/lib/python3.11/site-packages (from torch==2.1.0.post2+cxx11.abi) (4.11.0)
Requirement already satisfied: sympy in /home/spandey2/miniconda3/envs/tts/lib/python3.11/site-packages (from torch==2.1.0.post2+cxx11.abi) (1.12)
Requirement already satisfied: networkx in /home/spandey2/miniconda3/envs/tts/lib/python3.11/site-packages (from torch==2.1.0.post2+cxx11.abi) (2.8.8)
Requirement already satisfied: jinja2 in /home/spandey2/miniconda3/envs/tts/lib/python3.11/site-packages (from torch==2.1.0.post2+cxx11.abi) (3.1.4)
Requirement already satisfied: fsspec in /home/spandey2/miniconda3/envs/tts/lib/python3.11/site-packages (from torch==2.1.0.post2+cxx11.abi) (2024.3.1)
Requirement already satisfied: MarkupSafe>=2.0 in /home/spandey2/miniconda3/envs/tts/lib/python3.11/site-packages (from jinja2->torch==2.1.0.post2+cxx11.abi) (2.1.5)
Requirement already satisfied: mpmath>=0.19 in /home/spandey2/miniconda3/envs/tts/lib/python3.11/site-packages (from sympy->torch==2.1.0.post2+cxx11.abi) (1.3.0)
Installing collected packages: torch
  Attempting uninstall: torch
    Found existing installation: torch 2.1.0a0+cxx11.abi
    Uninstalling torch-2.1.0a0+cxx11.abi:
      Successfully uninstalled torch-2.1.0a0+cxx11.abi
ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
tts 0.22.0 requires transformers>=4.33.0, but you have transformers 4.31.0 which is incompatible.
```

@feng-intel

This is from an Intel employee. Let's talk internally for more info.
