Anima makes it possible to converse with NPCs in Skyrim through a simple LLM and TTS integration.
FEATURES:
Engage in direct dialogues.
Broadcast messages, so everybody in the room hears and can respond.
Dialogues between NPCs.
Participate in lectures at the College of Winterhold.
A web GUI for configuring characters
Alive characters, manageable through the GUI. You can set some characters as alive and they will start talking to you at unexpected times, without any input.
LLM and TTS options
You can use OpenAI, OpenRouter, Groq or MinstralAI as remote sources, or OLLAMA as a local source.
For TTS, you can use the Google TTS API (described below) or run it locally (I have a separate exe for this - I suggest this option with XTTS_v2 if you have a capable PC).
If you'd like to run the LLM and TTS locally, just read the Installation section and skip to "Running TTS Locally" and "Running OLLAMA Locally".
INSTALLATION
Download and install the mod file manually or using a mod manager.
Download the AnimaApp app and place it inside your Skyrim SE folder as Anima (there should be a folder named Anima in your Skyrim folder, and Anima.exe and the other files should be in that folder).
Open the .env file inside your Anima folder and fill in the SKYRIM_FOLDER and MODS_FOLDER fields. (MODS_FOLDER is where your mods are stored - if you're using a mod manager, go to its settings to see the path. If you install manually, it is the Data\ folder inside your Skyrim SE folder.)
Download LipGenerator and extract it into your Skyrim folder.
Fill in the LLM_PROVIDER field in your .env file according to your choice. (OPENAI, OPENROUTER, GROQ, MINSTRALAI* or OLLAMA**)
Fill in the TTS_PROVIDER field in your .env file according to your choice. (GOOGLE or LOCAL)
Fill in the ..._API_KEY field appropriately. (If you use Google for LLM or TTS, you should fill in GOOGLE_API_KEY even if you use another service for the LLM; if you use MINSTRALAI for the LLM, for example, fill in its API key field.)
Fill in the ..._MODEL field appropriately. (This refers to the LLM model - if you selected OPENAI, then OPENAI_LLM_MODEL. See the example .env sketch after the notes below.)
Configure your choice of TTS as described below
Configure your choice of LLM as described below
* The free tier of MinstralAI works well, and it is fast; I don't think their paid prices are high either.
** Ollama is a package that makes it very easy to run LLMs locally.
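For reference, here is a minimal sketch of the main .env fields for the fully local setup (OLLAMA for the LLM and local TTS). The paths and the model name are placeholders, not required values - adjust them to your own installation:

  SKYRIM_FOLDER=C:\Games\Steam\steamapps\common\Skyrim Special Edition
  MODS_FOLDER=C:\Games\Steam\steamapps\common\Skyrim Special Edition\Data
  LLM_PROVIDER=OLLAMA
  OLLAMA_MODEL=llama3
  TTS_PROVIDER=LOCAL

If you prefer a remote provider instead, set LLM_PROVIDER to that provider and fill in its ..._API_KEY and ..._LLM_MODEL fields, as shown in the provider sections below.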
Google AI API CONFIGURATION
Go to Google Cloud Console.
Create a new project and activate it.
Click the top-left menu icon.
Select APIs & Services -> Enabled APIs & Services.
Click Add New API.
Type "Vertex AI API" to search box and enable it. (If you want to use Google's LLM service)
Type "Cloud Text to Speech API" to search box and enable it. (If you want to use Google's TTS service)
Get a credential file as described here: Creating service keys
Save the file to a safe path and fill in GOOGLE_APPLICATION_CREDENTIALS in your .env file with its path.
Fill in GOOGLE_PROJECT_ID with your project id.
Fill in GOOGLE_LLM_MODEL with your desired choice of LLM. (gemini-1.0-pro or gemini-1.5-flash ==> gemini-1.0-pro has a higher per-minute limit for free trial accounts.)
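Putting the Google-related fields together, a sketch might look like this (the API key, credentials path and project id are placeholders):

  TTS_PROVIDER=GOOGLE
  GOOGLE_API_KEY=your-google-api-key
  GOOGLE_APPLICATION_CREDENTIALS=C:\Keys\anima-service-account.json
  GOOGLE_PROJECT_ID=my-anima-project
  GOOGLE_LLM_MODEL=gemini-1.5-flash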
OpenRouter Configuration
Go to OpenRouter.
Create an account or login to your existing account.
Select Keys from the top-right menu.
Generate a key. Copy and paste it into OPENROUTER_API_KEY field in your .env file.
Select a model and fill in OPENROUTER_LLM_MODEL field in your .env file with it. (default: google/gemma-2-9b-it:free)
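For example, with the default model mentioned above (the key value is a placeholder):

  LLM_PROVIDER=OPENROUTER
  OPENROUTER_API_KEY=paste-your-openrouter-key-here
  OPENROUTER_LLM_MODEL=google/gemma-2-9b-it:free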
OpenAI, Groq, Minstral AI Configuration
Similar to OpenRouter: sign up, create an API key and fill in the corresponding field in your .env file.
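For example, for OpenAI - assuming the key field follows the same ..._API_KEY naming pattern as the other providers, and with the model name only as an illustration:

  LLM_PROVIDER=OPENAI
  OPENAI_API_KEY=paste-your-openai-key-here
  OPENAI_LLM_MODEL=gpt-4o-mini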
Running TTS Locally
Download the TTS Server from the files section and put it in a path of your choice.
Run install.bat (* see NOTE below)
Select a model from the list below and fill in the TTS_MODEL property of the .env file located in tts_server/app/flask_app. (See the example at the end of this section.)
* If you receive Access Denied error during install.bat, please do the following:
Go to C:\Users\YOUR_USER_NAME\AppData\Local\Temp
Find the Winget folder.
Right click it, open the Security tab and give your user permission to read/write/execute.
Models:
1: tts_models/multilingual/multi-dataset/xtts_v2 (Best quality, you need a good GPU)
2: tts_models/multilingual/multi-dataset/xtts_v1.1 (Faster than xtts_v2, but compromises on quality)
3: tts_models/multilingual/multi-dataset/your_tts (This runs very fast even with old GPUs, but quality isn't that good)
Run app.bat each time before launching the game. (On the first run, it will download the model. After it completes, you can go to http://localhost:8020 to see if it prints "Server works!")
In this case, TTS_PROVIDER should be LOCAL.
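For example, to use xtts_v2, set this in the .env file under tts_server/app/flask_app:

  TTS_MODEL=tts_models/multilingual/multi-dataset/xtts_v2

and this in Anima's own .env:

  TTS_PROVIDER=LOCAL

While app.bat is running, you can also check the server from a command prompt instead of the browser:

  curl http://localhost:8020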
Running OLLAMA locally
Download and install Ollama.
Open cmd and run the command "ollama run MODEL_NAME". See: models.
In this case, LLM_PROVIDER should be OLLAMA and OLLAMA_MODEL parameter should be set to whatever model you use in OLLAMA.
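For example, to use the llama3 model (just an example model name from the Ollama library):

  ollama run llama3

and in your .env:

  LLM_PROVIDER=OLLAMA
  OLLAMA_MODEL=llama3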
That's it! Now all you need to do is run Anima once manually and let it create its source files (wait for a minute and it'll launch). Afterwards, you can simply launch the game and the Anima app will be launched automatically, providing the interfaces for communicating with the APIs.
**IMPORTANT**
Please save and reload at the first launch for the dialogues to be spoken.
**IMPORTANT**
Subtitles are necessary for NPCs to catch what other NPCs say in-game, so if you want this feature, do not disable subtitles in the game menu.
If you don't want subtitles shown on screen, you can instead run hide_subtitles.txt in the console (see the files section) using the "bat hide_subtitles" command. Make sure to put hide_subtitles in your root game directory.
You can do the same with show_subtitles.txt to show subtitles again.
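For example, with hide_subtitles.txt in your root game directory, open the in-game console (the ~ key by default) and type:

  bat hide_subtitles

To bring subtitles back on screen:

  bat show_subtitles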
HOTKEYS
Y -> Start dialogue with NPC
U -> Send broadcast message
} -> End N2N dialogue
{ -> Hard reset
Additional Configurations
* NPC-to-NPC conversations are disabled by default. To enable them, set N2N_ENABLED to TRUE.
** BROADCAST_RECURSIVE must be set to TRUE for now for broadcasting and N2N dialogues to work properly. I'll be developing a feature specifically for this.
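For example, to enable both in your .env:

  N2N_ENABLED=TRUE
  BROADCAST_RECURSIVE=TRUE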
Nether's Follower Framework Patch
There's another .esp file that provides integration with NFF. When you talk with your followers, if you tell them to "stay close" they will stay close to you; if you tell them to "relax", the default sandbox package will be run on them, although they will still follow you. I may keep working on this to enrich the capabilities of follower dialogues.
User Profile
Profiles (one for each character) are stored where the Anima application lives in your Skyrim Special Edition folder, under the Anima/Profiles directory. A folder with your character's name is created there the first time you launch. After launching, a profile.txt file will also be created under your character's folder. You can fill this file with information about your character, which will be used in dialogues.
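The profile appears to be free-form text; a purely illustrative example of what Anima/Profiles/YOUR_CHARACTER/profile.txt could contain:

  Adelaide is a Breton battlemage from High Rock. She is curious, a little sarcastic, and deeply distrusts the Thalmor.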
College Lectures
You can join college lectures at the College of Winterhold. It's a very cool feature; the college masters will talk about their areas of expertise.
Just go to the location in Hall of the Elements where Tolfdir presents his lecture on wards.
You can find the College Curriculum book in your quarters.
Updating from Old Version
I suggest a clean install. Remove Anima from your Data folder (or uncheck it if you're using a mod manager). Run the game, save and exit. Install the new Anima. Run the game, save and reload.
WEB GUI
After launching Anima.exe, you can configure your characters in a web GUI served at http://localhost:3000.
LOGS
It'd be good to see the logs if any crash happens. You can download a mod for this here: CrashLogger. These logs are stored in Documents\My Games\Skyrim Special Edition\SKSE.
There are also logs generated under the Documents\My Games\Skyrim Special Edition\Logs\Script folder, the Anima folder, and inside your Skyrim SE folder (Anima.log). You can send me these logs if any problem occurs and I'll try my best to deal with the issues.
== DEMO ==
Demo
== DISCORD ==
Anima