Digital Ocean
With the ever-increasing interest in SONAR system applications for the detection, monitoring, and classification of acoustic targets, as well as for communication with nearby participants, a plethora of different scenarios and configurations has to be considered. Manually testing these setups by deploying them in real ocean, port, or water-tank environments is both cost- and time-intensive. The resulting slow feedback loops can hinder the development of algorithms and approaches for modern SONAR systems. Building a "digital twin" of the water column and the acoustic targets that commonly inhabit it enables rapid feedback and therefore faster development and research on SONAR applications.
This demo presents such a simulated ocean environment ("digital ocean") for SONAR applications, encompassing multiple simulated acoustic targets such as ships, walls, marine mammals, and bubbles. Each target is placed inside a 3D environment and has its own trajectory and frequency response, which models its backscatter behavior for reflections of acoustic waves. Furthermore, each target can also act as an active sound source, emitting acoustic signals from its own location. The targets are initialized according to the parameters specified in the digital ocean configuration during the initialization phase; further targets can be added dynamically at runtime.
User interface of the environment simulation.
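A target of the kind described above can be sketched as a small data structure carrying position, trajectory, and a per-frequency backscatter response. All names and the linear-trajectory model here are illustrative assumptions, not the actual KiRAT implementation:

```python
from dataclasses import dataclass

import numpy as np


@dataclass
class AcousticTarget:
    """Hypothetical sketch of one simulated target in the digital ocean."""
    position: np.ndarray            # 3D position [x, y, z] in metres
    velocity: np.ndarray            # 3D velocity in m/s, defines the trajectory
    frequency_response: np.ndarray  # complex backscatter gain per frequency bin
    is_active_source: bool = False  # target can also emit its own signal

    def step(self, dt: float) -> None:
        """Advance the target along its (here: linear) trajectory."""
        self.position = self.position + self.velocity * dt

    def reflect(self, incident_spectrum: np.ndarray) -> np.ndarray:
        """Apply the target's frequency response to an incident wave spectrum."""
        return incident_spectrum * self.frequency_response


# Targets are created in the initialization phase ...
targets = [
    AcousticTarget(np.array([100.0, 0.0, -20.0]), np.array([2.0, 0.0, 0.0]),
                   np.ones(512, dtype=complex)),
]
# ... and further targets can be added dynamically at runtime,
# e.g. a weakly reflecting, slowly rising bubble-like target:
targets.append(
    AcousticTarget(np.array([50.0, 10.0, -30.0]), np.array([0.0, 0.0, 0.5]),
                   0.1 * np.ones(512, dtype=complex))
)
targets[0].step(1.0)  # advance the first target by one second
```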
Dynamic environmental processes and effects, such as the simulation of the ocean surface, ambient noise, or wind and water currents, are also available. Their properties, such as the direction of the currents or the strength of the noise, can be adjusted in real time through the GUI or a priori in the digital ocean configuration. Different environment scenarios can thus be designed and evaluated.
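Such a configuration could be modeled as a plain parameter object whose fields can be set a priori and then changed at runtime. The field names below are invented for illustration and do not reflect the actual digital ocean configuration format:

```python
from dataclasses import dataclass


@dataclass
class DigitalOceanConfig:
    """Hypothetical environment configuration; all names are illustrative."""
    surface_wave_height_m: float = 0.5     # ocean-surface simulation parameter
    ambient_noise_level_db: float = 40.0   # strength of the environment noise
    current_direction_deg: float = 90.0    # direction of the water current
    current_speed_mps: float = 0.3
    wind_speed_mps: float = 5.0


# set a priori in the configuration ...
cfg = DigitalOceanConfig(surface_wave_height_m=1.2)
# ... or adjusted in real time, e.g. through the GUI:
cfg.ambient_noise_level_db = 55.0
```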
The SONAR simulation processing for the projector signals takes place in the frequency domain, as seen below. The transformed signals are propagated through a processing chain that emulates the propagation of acoustic waves through the configured environmental channels.
System simulation processing chain GUI in KiRAT.
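A minimal sketch of such frequency-domain propagation, assuming a single-path channel: the signal is transformed, the travel-time delay is applied as a linear phase shift together with a simple amplitude loss, and the result is transformed back. A real channel model would add frequency-dependent absorption, spreading loss, and multipath:

```python
import numpy as np


def propagate(signal: np.ndarray, fs: float, distance: float,
              speed_of_sound: float = 1500.0,
              attenuation: float = 1.0) -> np.ndarray:
    """Frequency-domain propagation sketch for one environmental channel.

    The travel-time delay appears as a linear phase term in the spectrum;
    `attenuation` is a crude stand-in for all amplitude losses.
    """
    n = len(signal)
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    delay = distance / speed_of_sound  # travel time in seconds
    spectrum *= attenuation * np.exp(-2j * np.pi * freqs * delay)
    return np.fft.irfft(spectrum, n=n)


# illustrative check: an impulse delayed by distance / c = 0.1 s at fs = 100 Hz,
# i.e. by 10 samples
impulse = np.zeros(64)
impulse[0] = 1.0
delayed = propagate(impulse, fs=100.0, distance=150.0)
```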
The actual underwater signal processing takes place in the SONAR processing chain seen below. Since the SONAR processing is decoupled from the simulation process, the hydrophone signals can come either from real hydrophone recordings or from simulated environment signals. The same processing can therefore be used flexibly with a simulated environment (running as a second KiRAT instance on another PC) or connected to the hydrophone output during a real-environment evaluation. The projector transmit sequences are also generated here.
SONAR processing chain GUI in KiRAT.
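The decoupling could be expressed as a single input abstraction: the SONAR chain requests a block of hydrophone samples without caring where they originate. Both branches below are illustrative stand-ins, not KiRAT's actual interfaces:

```python
import numpy as np


def get_hydrophone_block(source: str, n_samples: int = 1024) -> np.ndarray:
    """Sketch of the simulation/hardware decoupling (hypothetical API)."""
    if source == "simulation":
        # e.g. received over the network from a second KiRAT instance;
        # random noise is only a placeholder for the simulated block
        return np.random.randn(n_samples)
    if source == "hardware":
        # e.g. read from the real hydrophone A/D converter
        raise NotImplementedError("hardware capture not available in this sketch")
    raise ValueError(f"unknown source: {source}")


# the downstream SONAR processing is identical for both sources:
block = get_hydrophone_block("simulation")
```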
The hydrophone signals are correlated with the transmit sequences to generate the echogram in the SONAR operator user interface seen below. Both the transmit- and receive-side beamformer settings can be controlled here, along with other settings such as the spread control, the scaling, and the data chosen for plotting.
User interface of the SONAR operator.
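The correlation step can be sketched as a matched filter: one echogram line is the cross-correlation of a received block with the transmit sequence, and its peaks correspond to echo delays. The chirp-like random sequence and the single-echo received signal below are illustrative:

```python
import numpy as np


def echogram_line(hydrophone: np.ndarray, transmit: np.ndarray) -> np.ndarray:
    """Correlate the received signal with the transmit sequence (matched
    filter); peaks in the output indicate target echoes."""
    # 'valid' mode yields one correlation value per candidate echo delay
    return np.correlate(hydrophone, transmit, mode="valid")


# illustrative usage: a clean echo of the transmit sequence at a delay of
# 30 samples
rng = np.random.default_rng(0)
tx = rng.standard_normal(64)
rx = np.zeros(256)
rx[30:30 + 64] = tx
line = echogram_line(rx, tx)
delay_estimate = int(np.argmax(line))  # → 30
```

Stacking such lines over successive pings, after beamforming, yields the echogram displayed in the operator interface.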
Due to its adaptability and accurate real-time feedback of the environment state, the digital ocean also enables autonomous agent-environment interaction for AI training, e.g., the training of a Reinforcement Learning agent. Through the dynamic responses to changed SONAR scan parametrizations, an AI agent can explore the environment dynamics and draw conclusions on its own, without the need for manual labeling. It can thus independently learn to adapt to its observed environment by setting the system parametrization to achieve a desired goal. For the detection and classification of rising methane gas bubbles in the water column, such a goal could be maximizing the SNR of the bubble targets in the received scans.
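The SNR-maximization goal could serve directly as the reward signal in such a loop. Below is a sketch of a reward function, assuming the bubble targets occupy a known range of bins in the received scan; the surrounding agent-environment loop is shown only schematically, since `env` and `agent` are hypothetical wrappers around the digital ocean and an RL policy:

```python
import numpy as np


def bubble_snr(scan: np.ndarray, bubble_bins: slice) -> float:
    """Reward sketch: SNR (in dB) of the bubble region of a received scan,
    relative to the remaining bins treated as noise."""
    idx = np.arange(*bubble_bins.indices(len(scan)))
    signal_power = np.mean(scan[bubble_bins] ** 2)
    noise_power = np.mean(np.delete(scan, idx) ** 2)
    return 10.0 * np.log10(signal_power / noise_power)


# schematic agent-environment loop (env/agent are hypothetical):
# obs = env.reset()
# for _ in range(num_steps):
#     action = agent.act(obs)        # e.g. a new SONAR scan parametrization
#     obs, scan = env.step(action)   # digital ocean responds dynamically
#     reward = bubble_snr(scan, bubble_bins)
#     agent.learn(obs, action, reward)

# illustrative check: bubbles 10x stronger in amplitude than the background
scan = np.full(100, 0.1)
scan[10:20] = 1.0
snr_db = bubble_snr(scan, slice(10, 20))  # → 20 dB
```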