Adaptive Reverb
Adaptive reverb is one of those tools that is fun to talk about but rarely gets a practical implementation. For this project, I thought I would develop several methods for calculating reverb for a simple room and see which is most efficient.
My first attempt looked something like this:
- Line trace (raycast) to the surroundings to detect objects with the environment tag (separate script for outdoors)
- Cast rays in front of and behind the camera, up to a set quantity limit
- Time the raycasting interval to the walk/run timer
- Return the distance as a 3D vector
- Sum the vectors to get the room dimensions
- Calculate the room volume
- Send the volume to FMOD (the default is l × w × h, but a slider could potentially be added for different room types)
- Parameters go to a reverb snapshot, which remaps the values to EQ
Naturally, the quickest way to implement this would be in Blueprints.
Blueprint graph: room volume from player location and front-camera raycasting (rinse/repeat)
This script ran very quickly and was easy on computational resources. That being said, it doesn't take more than a few details of the room into account, and is only really accurate for simple, box-shaped rooms. In light of this, I thought it might be smart to delegate this task elsewhere.
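For reference, here is roughly what that first pass looks like in code. This is a minimal C# sketch rather than the original Blueprint graph, and it assumes the FMOD for Unity integration is installed and that a global parameter named "RoomVolume" (a hypothetical name) has been set up in the FMOD project.

RoomVolumeEstimator.cs
using UnityEngine;

public class RoomVolumeEstimator : MonoBehaviour
{
    public LayerMask environmentLayer; // objects carrying the environment tag/layer
    public float maxRayDistance = 50f; // illustrative default

    //Estimate the room as a box by raycasting along each axis in both directions
    public void EstimateRoom()
    {
        float width = AxisExtent(transform.right);
        float depth = AxisExtent(transform.forward);
        float height = AxisExtent(transform.up);
        float volume = width * depth * height; // default is l x w x h

        //"RoomVolume" is a hypothetical FMOD global parameter name
        FMODUnity.RuntimeManager.StudioSystem.setParameterByName("RoomVolume", volume);
    }

    //Sum the hit distances along a direction and its opposite
    float AxisExtent(Vector3 dir)
    {
        float total = 0f;
        foreach (Vector3 d in new[] { dir, -dir })
        {
            if (Physics.Raycast(transform.position, d, out RaycastHit hit, maxRayDistance, environmentLayer))
                total += hit.distance;
        }
        return total;
    }
}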
For the uninitiated, Project Acoustics (née Project Triton) is an acoustics system that calculates soundwave behaviour ahead of runtime, using Azure (see the Project Acoustics FAQ). By taking scene geometry and converting it into voxelized materials, it can provide accurate acoustic parameters for the sound designer to use, regardless of scene complexity, at low CPU cost.
Update: The developers moved Project Acoustics to an internal-only resource on June 4th, 2024.
Acoustic Parameter List
Name | Type/Unit | Range
---|---|---
Occlusion | float | {0, 2}
Transmission | dB | {-60, 60}
Wetness | dB | {-24, 24}
Decay Time Scale | float | {0, 2}
Outdoorness | float | {-1, 1}
Perceptual Distance Warp | float | {0, 2}
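Since each parameter lives on a different scale, it helps to normalize a value against its documented range before using it to drive a snapshot or mixer. A minimal sketch, using the ranges from the table above (the class and method names are mine):

using UnityEngine;

public static class AcousticsRanges
{
    //Map a parameter value from its documented range into [0, 1]
    public static float Normalize(float value, float min, float max)
    {
        return Mathf.InverseLerp(min, max, value);
    }
}

For example, AcousticsRanges.Normalize(wetnessDb, -24f, 24f) maps a Wetness value into [0, 1].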
Given the granularity of control available over the baked acoustic simulation, I thought it would be worthwhile to try to design an interactive scene around these parameters, as suggested. That's why I decided to read all of the provided Microsoft Game Stack documentation and implement it into Unity myself.
This article has a (delisted) companion video here.
Interaction Audio Controller Script
Suppose that we have a scene with a variety of objects of different shapes and sizes. Some are interactable and can be manipulated by the player, while others cannot. The interactable objects need to be identified somehow, so let's attach audio sources to each of them, to be activated when the player is within a 2 m radius.
Raycasts.cs
using UnityEngine;

public class Raycasts : MonoBehaviour
{
    public Camera m_cam;                  // the player camera
    public float rayDistance = 2f;        // illustrative default for the 2m interaction distance
    public float raySphereRadius = 0.5f;  // illustrative default
    public float objectSphereRadius = 2f; // 2m activation sphere around the object
    public LayerMask interactableLayer;
    public AudioSource[] audioSources = new AudioSource[1];
    private bool m_interacting;

    //Check for objects within a 2m distance
    void CheckForInteractable()
    {
        Ray _ray = new Ray(m_cam.transform.position, m_cam.transform.forward);
        RaycastHit _hitInfo;
        bool _hitSomething = Physics.SphereCast(_ray, raySphereRadius, out _hitInfo, rayDistance, interactableLayer);
        if (_hitSomething)
        {
            m_interacting = true;
            audioSources[0] = _hitInfo.transform.GetComponent<AudioSource>();
            PlaySound();
        }
        else
        {
            m_interacting = false;
        }
    }

    //Play sound(s) while the player is within a 2m sphere around the object
    void PlaySound()
    {
        LayerMask Player = LayerMask.GetMask("Player");
        if (Physics.CheckSphere(transform.position, objectSphereRadius, Player) && !audioSources[0].isPlaying)
        {
            audioSources[0].Play();
        }
        else if (!Physics.CheckSphere(transform.position, objectSphereRadius, Player) && audioSources[0].isPlaying)
        {
            audioSources[0].Pause();
        }
    }
}
Objects of Interest Script
With that out of the way, we can make things more precise. What if, for example, only objects that remain 'in focus' have their signature sound come to the fore, while others recede into the background? The idea is to use the acoustic parameters given to us to make this transition as natural as possible.
ObjectsOfInterest.cs
using System.Collections.Generic;
using UnityEngine;

public class ObjectsOfInterest : MonoBehaviour
{
    public bool requireFieldOfView = true;
    public GameObject Camera;
    public float verticalFieldOfView = 30;
    public float horizontalFieldOfView = 60;
    public float maximumInteractiveDistance = 5f;
    public LayerMask interactiveLayers;
    private AudioSource[] audioSources;

    // Start is called before the first frame update
    void Start()
    {
        // Resolve the interactable layer mask once at startup
        interactiveLayers = LayerMask.GetMask("Interactable");
    }
    // Update is called once per frame
    void Update()
    {
        List<GameObject> interactiveObjects = findInteractiveObjects();
        //printInteractiveObjects(interactiveObjects);
        audioSources = InteractiveObjectsAudio(interactiveObjects);
        if (Physics.CheckSphere(transform.position, maximumInteractiveDistance, interactiveLayers))
        {
            FadeInMultiple(audioSources, 10.0f);
        }
        else
        {
            FadeOutMultiple(audioSources, 10.0f);
        }
    }
    //Find all interactive objects within the player's FOV
    public List<GameObject> findInteractiveObjects()
    {
        Vector3 distance, adj, vHyp, hHyp;
        float hAngle, vAngle;
        List<GameObject> interactiveObjects = new List<GameObject>();
        List<GameObject> noninteractiveObjects = new List<GameObject>();

        // Determine the camera view
        Transform FOV;
        if (Camera != null)
            FOV = Camera.transform;
        else
            FOV = transform;

        // Find current colliders
        Collider[] proximityObjects = Physics.OverlapSphere(FOV.position, maximumInteractiveDistance, interactiveLayers.value);
        foreach (Collider col in proximityObjects)
        {
            distance = col.transform.position - FOV.position;
            adj = Vector3.Dot(distance, FOV.forward) * FOV.forward;
            // Project the offset onto the vertical (forward-up) and horizontal (forward-right) planes
            vHyp = distance - (Vector3.Dot(distance, FOV.right) * FOV.right);
            vAngle = Mathf.Rad2Deg * Mathf.Acos(adj.magnitude / vHyp.magnitude);
            hHyp = distance - (Vector3.Dot(distance, FOV.up) * FOV.up);
            hAngle = Mathf.Rad2Deg * Mathf.Acos(adj.magnitude / hHyp.magnitude);

            //Ensure they are in the field of view (both angles must be within limits)
            if ((hAngle <= horizontalFieldOfView && vAngle <= verticalFieldOfView) || !requireFieldOfView)
            {
                GameObject interactiveObj = col.gameObject;
                interactiveObjects.Add(interactiveObj);
                noninteractiveObjects.Remove(interactiveObj);
            }
            else
            {
                GameObject interactiveObj = col.gameObject;
                interactiveObjects.Remove(interactiveObj);
                noninteractiveObjects.Add(interactiveObj);
                FadeOutMultiple(InteractiveObjectsAudio(noninteractiveObjects), 10.0f);
            }
        }
        return interactiveObjects;
    }
    //Print a list of all the interactive objects in view
    public void printInteractiveObjects(List<GameObject> interactiveObjects)
    {
        foreach (GameObject gameObject in interactiveObjects)
        {
            Debug.Log("Object Tag: " + gameObject.tag);
        }
    }
    //Return an array with all of the audio sources of the interactive objects in view
    public AudioSource[] InteractiveObjectsAudio(List<GameObject> interactiveObjects)
    {
        // Collect the sources from every object, not just the last one iterated
        List<AudioSource> sources = new List<AudioSource>();
        foreach (GameObject gameObject in interactiveObjects)
        {
            sources.AddRange(gameObject.GetComponents<AudioSource>());
        }
        return sources.ToArray();
    }
    //Fades in all of the audio sources that have been enabled by being in view
    public void FadeInMultiple(AudioSource[] audioSources, float FadeTime)
    {
        if (audioSources != null && audioSources.Length != 0)
        {
            for (int i = 0; i < audioSources.Length; i++)
            {
                // Only fade sources that are already playing, up to a 0.3 volume ceiling
                if (audioSources[i].isPlaying && audioSources[i].volume < 0.3f)
                {
                    audioSources[i].volume += 1 * Time.deltaTime / FadeTime;
                }
            }
        }
    }

    //Fades out all of the audio sources that have been disabled by NOT being in view
    public void FadeOutMultiple(AudioSource[] audioSources, float FadeTime)
    {
        if (audioSources == null) return;
        for (int i = 0; i < audioSources.Length; i++)
        {
            if (audioSources[i].volume > 0)
            {
                audioSources[i].volume -= 2 * Time.deltaTime / FadeTime;
            }
        }
    }
}
Environmental Audio Controller
Additionally, we can manage a list of tagged "Environmental" sound objects, which retreat from the soundscape whenever any interactable object is in focus.
EnvironmentalAudioController.cs
using System.Collections.Generic;
using UnityEngine;
using UnityEngine.Audio;

public class EnvironmentalAudioController : MonoBehaviour
{
    #region Variables
    public GameObject audioListener;
    public LayerMask Player;
    public float maxDistance = 0f;
    public float sphereRadius = 0f;
    private float _objectDistance;
    public GameObject[] environmentalObjects;
    public AudioSource[] audioSources;
    public AcousticsAdjust[] acousticsAdjust;
    public AcousticsAdjustExperimental[] acousticsAdjustExperimental;
    public AudioMixer environmentalMixer;
    public AudioMixerSnapshot[] forwardSnapshots;
    public AudioMixerSnapshot[] backwardSnapshots;
    public float mixerThreshold = 50;
    public float[] weights;
    #endregion

    // Start is called before the first frame update
    void Start()
    {
        Player = LayerMask.GetMask("Player");
        environmentalObjects = GameObject.FindGameObjectsWithTag("Environmental");
        // Collect the components from every environmental object, not just the last one iterated
        var sources = new List<AudioSource>();
        var adjusts = new List<AcousticsAdjust>();
        var adjustsExperimental = new List<AcousticsAdjustExperimental>();
        foreach (GameObject enviroObject in environmentalObjects)
        {
            sources.AddRange(enviroObject.GetComponents<AudioSource>());
            adjusts.AddRange(enviroObject.GetComponents<AcousticsAdjust>());
            adjustsExperimental.AddRange(enviroObject.GetComponents<AcousticsAdjustExperimental>());
        }
        audioSources = sources.ToArray();
        acousticsAdjust = adjusts.ToArray();
        acousticsAdjustExperimental = adjustsExperimental.ToArray();
    }
    // Update is called once per frame
    void Update()
    {
        RaycastHit outInfo;
        Vector3 objectVector = audioListener.transform.position - transform.position;
        bool hit = Physics.Raycast(transform.position, objectVector, out outInfo, maxDistance, Player);
        //Debug.DrawRay(transform.position, objectVector, hit ? Color.green : Color.red);
        if (hit)
        {
            _objectDistance = getDistance(outInfo);
            if (_objectDistance < 2.0f)
            {
                float updateAmount = 0.25f * Time.deltaTime;
                foreach (AcousticsAdjustExperimental _acousticsAdjustExp in acousticsAdjustExperimental)
                {
                    _acousticsAdjustExp.IncreasePerceptualDistanceWarp(updateAmount);
                    //Debug.Log("Increasing Distance Warp!");
                }
                BlendSnapshots(_objectDistance);
            }
            else
            {
                float updateAmount = 0.25f * Time.deltaTime;
                foreach (AcousticsAdjustExperimental _acousticsAdjustExp in acousticsAdjustExperimental)
                {
                    _acousticsAdjustExp.DecreasePerceptualDistanceWarp(updateAmount);
                    //Debug.Log("Decreasing Distance Warp!");
                }
                unBlendSnapshots(_objectDistance);
            }
        }
    }
    float getDistance(RaycastHit outInfo)
    {
        float objectDistance = outInfo.distance;
        return objectDistance;
    }

    public void BlendSnapshots(float distance)
    {
        //Debug.Log("Blending snapshots forward!");
        weights[0] = distance;
        weights[1] = mixerThreshold * (2 * distance);
        environmentalMixer.TransitionToSnapshots(forwardSnapshots, weights, 0.2f);
    }

    public void unBlendSnapshots(float distance)
    {
        //Debug.Log("Blending snapshots backward!");
        weights[0] = distance;
        weights[1] = mixerThreshold / distance;
        environmentalMixer.TransitionToSnapshots(backwardSnapshots, weights, 0.2f);
    }
}
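One note on the snapshot blending: Unity's AudioMixer.TransitionToSnapshots treats the weights array as relative influences, so only the ratio between weights[0] and weights[1] matters, not their absolute magnitudes. The forward and backward snapshot arrays here presumably hold the "focused" and "unfocused" mix states, respectively.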
Result
As an alternative to an adaptive reverb implementation, this is quite effective aesthetically and is leagues ahead in terms of performance. Additionally, I found the ability to modify the degree to which each audio source follows the acoustic characteristics of the room useful for sonically highlighting objects of interest.