File information

Created by: MrHaurrus
Uploaded by: Haurrus
Virus scan: Some manually verified files

98 comments

  1. Haurrus
    Haurrus
    • supporter
    • 5 kudos
    Locked
    Sticky
    The initial launch should download the default model.

    If there's a problem with the initial download, you can download the needed files manually from xTTS-V2 (config.json, model.pth, speakers_xtts.pth, vocab.json).

    This is how your xtts_models folder should look:

    xtts_models/
    └── v2.0.2/
        ├── config.json
        ├── model.pth
        ├── speakers_xtts.pth
        └── vocab.json
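
    A quick way to verify the layout above (a minimal sketch, not part of the mod; adjust the path if your xtts_models folder lives elsewhere):

      # Check that the four required XTTS v2.0.2 files are where the server expects them.
      from pathlib import Path

      model_dir = Path("xtts_models/v2.0.2")
      required = ["config.json", "model.pth", "speakers_xtts.pth", "vocab.json"]

      missing = [name for name in required if not (model_dir / name).is_file()]
      if missing:
          print(f"Missing from {model_dir}: {', '.join(missing)}")
      else:
          print("All XTTS v2.0.2 model files are in place.")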
  2. leosdt
    leosdt
    • premium
    • 0 kudos
    Hi guys
    I have an RTX 5080, which isn't supported by the distributed PyTorch version. I tried to force a PyTorch upgrade using pip3 inside the internal XTTS folder, but then the server crashes on startup. When I launch it from the command line instead, it acts like a fresh install (and will probably overwrite the upgraded PyTorch files).
    Any ideas on how to make it work?
    Thanks
    1. caprican314
      caprican314
      • member
      • 0 kudos
      Just got my new rig today, exact same card, exact same problem, womp womp...

      Update: after some digging, it might in theory be possible to run XTTS CPU-bound by setting CUDA_LAUNCH_BLOCKING=1, but I'd advise against it, since that flag is meant for debugging and may do more harm than good. So unless NVIDIA finally adds CUDA support for the latest version of PyTorch, we are SOL, so to speak.
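
      For anyone in the same spot, a minimal sketch (assuming you can run Python from the server's bundled environment) that checks whether the shipped PyTorch wheel was actually built for your card, and falls back to CPU instead of crashing:

        # Report whether this PyTorch build includes kernels for the installed GPU.
        # An RTX 5080 reports compute capability (12, 0), i.e. it needs an sm_120 build.
        import torch

        def pick_device() -> str:
            if not torch.cuda.is_available():
                return "cpu"
            major, minor = torch.cuda.get_device_capability(0)
            if f"sm_{major}{minor}" not in torch.cuda.get_arch_list():
                # The wheel was not compiled for this architecture; CUDA calls would fail.
                return "cpu"
            return "cuda"

        print(pick_device())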
  3. Wolfes12
    Wolfes12
    • supporter
    • 4 kudos
    Does anyone know if there is a video that explains how to create the custom voice model for an NPC? I followed the written instructions with the WAV audio but I can't get the JSON file to generate.
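
    If it helps, here is a minimal sketch of generating a speaker latent from a WAV directly with the Coqui TTS API rather than through the mod's own step; the file names and JSON keys here are illustrative, and the schema the server expects may differ:

      # Build a speaker latent from a reference recording and dump it to JSON.
      import json
      from TTS.tts.configs.xtts_config import XttsConfig
      from TTS.tts.models.xtts import Xtts

      config = XttsConfig()
      config.load_json("xtts_models/v2.0.2/config.json")
      model = Xtts.init_from_config(config)
      model.load_checkpoint(config, checkpoint_dir="xtts_models/v2.0.2", eval=True)

      # "my_npc_voice.wav" is a hypothetical reference clip of the NPC's voice.
      gpt_cond_latent, speaker_embedding = model.get_conditioning_latents(
          audio_path=["my_npc_voice.wav"]
      )

      with open("my_npc_voice.json", "w") as f:
          json.dump({
              "gpt_cond_latent": gpt_cond_latent.cpu().squeeze().tolist(),
              "speaker_embedding": speaker_embedding.cpu().squeeze().tolist(),
          }, f)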
  4. DonMod
    DonMod
    • member
    • 0 kudos
    Thank you
  5. filwu8
    filwu8
    • supporter
    • 1 kudos
    Regarding xtts_mantella_api_server: how can it support /latent_speaker_folder/zh-cn? (See the sketch at the end of this thread.)
    1. filwu8
      filwu8
      • supporter
      • 1 kudos
      I have just gotten Chinese speech synthesis working, but it is extremely slow. Can Mantella support a large-scale streaming speech recognition model?

      For example: https://www.volcengine.com/docs/6561/1354869
    2. filwu8
      filwu8
      • supporter
      • 1 kudos
      I have already tried creating some Chinese audio files, which worked well; please refer to https://github.com/filwu8/zh-cn-Voice-For-SkyrimVR

      The demo (Douyin): "When a VR role-playing game integrates AI #RPG #TheElderScrolls #AI. With VR on, a door to a new world opens; games can be played this way too." https://v.douyin.com/i5Q62MAn/ (copy the link into the Douyin app to watch the video)
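
      On the /latent_speaker_folder/zh-cn question at the top of this thread: at the Coqui API level, Chinese output only requires passing the "zh-cn" language code at inference time; how the mod's server maps its per-language speaker folders onto this is an assumption on my part. A minimal end-to-end sketch (the reference clip name is hypothetical; use_deepspeed is an optional speed-up that requires the deepspeed package):

        # Synthesize Chinese speech with XTTS v2 via the Coqui TTS API.
        import torch
        import torchaudio
        from TTS.tts.configs.xtts_config import XttsConfig
        from TTS.tts.models.xtts import Xtts

        config = XttsConfig()
        config.load_json("xtts_models/v2.0.2/config.json")
        model = Xtts.init_from_config(config)
        # Add use_deepspeed=True for a large speed-up if deepspeed is installed.
        model.load_checkpoint(config, checkpoint_dir="xtts_models/v2.0.2", eval=True)
        model.cuda()  # remove this line to run on CPU (much slower)

        # Hypothetical zh-cn reference clip of the target voice.
        gpt_cond_latent, speaker_embedding = model.get_conditioning_latents(
            audio_path=["my_zh_voice.wav"]
        )

        out = model.inference("你好，旅行者。", "zh-cn", gpt_cond_latent, speaker_embedding)
        torchaudio.save("output.wav", torch.tensor(out["wav"]).unsqueeze(0), 24000)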
  6. MattM1159
    MattM1159
    • premium
    • 0 kudos
    I have XTTS working on a remote PC (very useful app, BTW), along with LM Studio. Both are working fine with MGO 3.5.2 on a different PC. The XTTS PC has a 3090 Ti with 24 GB of VRAM, and the PCs have 5 Gb/s NICs on a 10 Gb switch. I was wondering if there are any tweaks that can help XTTS performance? According to the Mantella deployment guide, XTTS is slower than Piper but has much better features. Any tips on tuning for performance?

    Thx, and well done. 
  7. alo0sandalias
    alo0sandalias
    • member
    • 0 kudos
    Hi, when I try to use the mod I get this message from the Mantella cmd window:

    21:37:51.738 TTS: Connecting to XTTS...
    21:37:53.772 TTS: Could not connect to XTTS. Attempting to run headless server...
    Traceback (most recent call last):
      File "PyInstaller\hooks\rthooks\pyi_rth_win32comgenpy.py", line 46, in <module>
      File "PyInstaller\hooks\rthooks\pyi_rth_win32comgenpy.py", line 25, in _pyi_rthook
      File "PyInstaller\loader\pyimod02_importers.py", line 419, in exec_module
      File "win32com\__init__.py", line 8, in <module>
      File "PyInstaller\loader\pyimod02_importers.py", line 419, in exec_module
      File "pythoncom.py", line 2, in <module>
        import pywintypes
      File "PyInstaller\loader\pyimod02_importers.py", line 419, in exec_module
      File "pywintypes.py", line 126, in <module>
      File "pywintypes.py", line 47, in __import_pywin32_system_module__
    ImportError: Module 'pywintypes' isn't in frozen sys.path ['D:\\user\\xtts_mantella_api_server-113445-1-5-1725725632\\_internal\\base_library.zip', 'D:\\user\\xtts_mantella_api_server-113445-1-5-1725725632\\_internal\\lib-dynload', 'D:\\user\\xtts_mantella_api_server-113445-1-5-1725725632\\_internal']
    [9744] Failed to execute script 'pyi_rth_win32comgenpy' due to unhandled exception!
    1. Shalashaska44
      Shalashaska44
      • member
      • 0 kudos
      I have the same issue too with "Could not connect to XTTS. Attempting to run headless server..." Does anyone know anything?
    2. RiitaSama
      RiitaSama
      • member
      • 0 kudos
      How did you fix the first error?
  8. ThisIsSolidSnake115
    ThisIsSolidSnake115
    • supporter
    • 0 kudos
    Help please, I have an error message in Mantella that says:
    Waiting for player to select an NPC...
    05:58:39.541 INFO: generated new fontManager
    05:59:06.968 INFO: Running LLM with OpenAI
    Running Mantella with 'gpt-4o-mini'. The language model can be changed in MantellaSoftware/config.ini
    05:59:08.563 TTS: Connecting to XTTS...
    05:59:10.611 TTS: Could not connect to XTTS. Attempting to run headless server...
    "C:/Users/Solid" no se reconoce como un comando interno o externo,
    programa o archivo por lotes ejecutable.

    I need help please. Why is this happening?
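
    The "C:/Users/Solid" line suggests the command that launches the headless server is being split at the space in the Windows user-name folder. A minimal illustration of the quoting problem (the executable path and flag are hypothetical, not the mod's actual launch code):

      # When a command string with a space is passed to the shell unquoted,
      # cmd.exe tries to execute "C:/Users/Solid" on its own and fails.
      import subprocess

      server_exe = r"C:\Users\Solid Snake\xtts\xtts_api_server.exe"  # hypothetical path

      # Fails: the shell splits the unquoted string at the space.
      # subprocess.run(server_exe + " --headless", shell=True)

      # Works: quote the path, or pass the arguments as a list (no shell splitting).
      subprocess.run(f'"{server_exe}" --headless', shell=True)
      subprocess.run([server_exe, "--headless"])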
  9. AmbientMusic
    AmbientMusic
    • supporter
    • 3 kudos
    wow.. I have no idea what this does.
  10. benjael3piernas
    benjael3piernas
    • member
    • 0 kudos
    I downloaded everything and it works fine, but when the NPCs answer me they reply in Spanish with an English accent, even though I already downloaded the Spanish voices.
    1. brayu07
      brayu07
      • member
      • 0 kudos
      The same thing happens to me, can anyone help please?
    2. nachuchander
      nachuchander
      • member
      • 0 kudos
      Same here!!! Any solution?
    3. JovenOculto
      JovenOculto
      • supporter
      • 0 kudos
      It may be due to Mantella's own configuration. With the game running, Mantella opens a browser tab with its configuration options; make sure that in the text-to-speech section the selected engine is XTTS and not Piper (which is the default and comes with that rather comical accent).
  11. JohnEmzak
    JohnEmzak
    • premium
    • 0 kudos
    Is there a way to make added voice models work when hosting on RunPod? I could only run them locally; when they are used by the hosted pod, I get an error about a missing voice model.