Time-Synchronization for Coherent Digital Signal Processing in Wireless Acoustic Sensor Networks

Sampling-time synchronization is a critical service for enabling coherent sound fusion, separation, equalization, cancellation, or localization on the basis of individual microphone signals in the discrete-time domain (e.g., Lienhart 2003, Pawig 2010, Schmalenstroer 2015). By coherent fusion of microphone signals we mean, for instance, the aggregation of multiple observations of the same acoustic scenario such that a target signal in the acoustic environment is coherently superimposed, while noncoherent noise or reverberation is rejected. With only a slight desynchronization of sampling times and frequencies, however, the individual microphone signals are interpreted on an inconsistent (e.g., dilating or retarding) time basis. A time-invariant acoustic scene then effectively appears as a time-varying one, such that space-time adaptive signal processing never reaches the desired equilibrium.
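The drift caused by even a small sampling-rate offset can be illustrated numerically. The following sketch (with purely illustrative parameters, not taken from the project: a 100 ppm offset on a 1 kHz tone at 16 kHz sampling) sums two microphone channels observing the same source: the sum is constructive at first, but turns destructive once the accumulated timing drift approaches half a period of the tone.

```python
import numpy as np

# Illustrative parameters (hypothetical): nominal rate, a 100 ppm
# sampling-rate offset (SRO) at the second node, and a 1 kHz test tone.
fs = 16000.0       # nominal sampling rate in Hz
sro = 100e-6       # relative sampling-rate offset of node 2
f0 = 1000.0        # test-tone frequency in Hz
dur = 5.0          # observation time in seconds

n = np.arange(int(dur * fs))
x1 = np.sin(2 * np.pi * f0 * n / fs)                # node 1: nominal clock
x2 = np.sin(2 * np.pi * f0 * n / (fs * (1 + sro)))  # node 2: clock off by sro

# Coherent sum of both channels, evaluated at the start and at the end of
# the recording: the power gain over a single channel drops from ~4 (fully
# constructive) towards 0 as the accumulated drift approaches half a period.
block = 1024
gain_first = np.mean((x1[:block] + x2[:block])**2) / np.mean(x1[:block]**2)
gain_last = np.mean((x1[-block:] + x2[-block:])**2) / np.mean(x1[-block:]**2)
print(f"power gain, first block: {gain_first:.2f}, last block: {gain_last:.2f}")
```

Even this 100 ppm offset, well within typical crystal-oscillator tolerances, accumulates a delay of 0.5 ms over the five seconds, which is exactly half a period at 1 kHz.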

To provide the required acoustic coherence of the sound source at the recording microphones, this project is devoted to small-space (i.e., in-room) applications, such as ambient-assisted-living arrangements, face-to-face communication supported by hearing aids, or telecommunication with a remote site. The in-room scenario is ad hoc in that it comprises an unspecified number of heterogeneous, networked sound sensors. These typically exhibit diverse acoustic sensitivities and qualities, according to their original purposes, for instance

  • concentrated microphone arrays as used in smart-home devices or gaming consoles,
  • additional single microphones in dedicated positions with good signal-to-noise ratio,
  • and mobile microphones such as in smartphones, hearing-aids, or possibly robots.

The project is then dedicated to the problem that sensor nodes with independent local A/D converters operate at various sampling rates and, in particular, deviate randomly from the nominal sampling frequency of the network. We will demonstrate the deteriorating effect of such inconsistent sampling on the coherent processing of microphone signals. From there, the project develops signal processing models to explain, and methods to resolve, asynchronous sampling.

Network synchronization protocols exist, both for computer networks and dedicated sensor networks, but their operation in acoustic sensor networks remains a challenge. Since they are mostly based on message passing, the related communication overhead consumes battery power and occupies radio bandwidth (or causes interference). Moreover, the transmitted time stamps are subject to considerable uncertainty in packet-oriented ad-hoc networks, where quality of service is not guaranteed.

The proposed project therefore utilizes the available sound waveforms directly as an information basis for detecting and correcting the asynchrony of local A/D converters. Detection and correction will thus be developed as a blind, autonomous, and self-adaptive control mechanism based on the microphone signals alone. A strong synergy between acoustic transfer-function alignment and sampling-rate alignment will be exploited, such that the joint treatment of acoustic alignment and timing alignment turns out to the mutual advantage of both. A similar strategy has been reported for an asynchronous acoustic echo cancellation problem in Voice-over-IP communication (see Pawig 2010).
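As a toy illustration of such waveform-based, blind asynchrony handling (a sketch under simplifying assumptions, not the project's actual method), the following example tracks the cross-correlation peak between two nodes observing the same source: the peak lag drifts linearly with time, its slope is an estimate of the sampling-rate offset, and resampling with that estimate compensates the drift. All signals and parameter values are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 8000.0
sro = 200e-6                              # true offset (200 ppm), to be estimated blindly
s = rng.standard_normal(int(10 * fs))     # common source signal (white noise)

# Node 1 samples nominally; node 2 effectively at fs*(1+sro), modeled here
# by linear interpolation of the source on a stretched time grid.
x1 = s
x2 = np.interp(np.arange(len(s)) / (1 + sro), np.arange(len(s), dtype=float), s)

# Blind estimation: the cross-correlation peak between short frames of both
# signals drifts linearly with time; its slope is the sampling-rate offset.
frame, hop, max_lag = 2048, 4096, 32
times, lags = [], []
for start in range(max_lag, len(s) - frame - max_lag, hop):
    a = x1[start:start + frame]
    c = [np.dot(a, x2[start + l:start + l + frame])
         for l in range(-max_lag, max_lag + 1)]
    times.append(start)
    lags.append(int(np.argmax(c)) - max_lag)

sro_hat = np.polyfit(times, lags, 1)[0]   # slope in samples per sample
print(f"estimated SRO: {sro_hat * 1e6:.0f} ppm")

# Correction: resample node 2 onto the reconstructed common time grid.
x2_sync = np.interp(np.arange(len(x2)) * (1 + sro_hat),
                    np.arange(len(x2), dtype=float), x2)
```

The linear-interpolation resampling here is only a stand-in; practical systems would use higher-order (e.g., Lagrange/Farrow-type) interpolators and continuously track a time-varying offset.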

Contact

Dr.-Ing. habil. Gerald Enzner
Ruhr-Universität Bochum (RUB)
Phone: +49 234/32-25392
Fax: +49 234/32-14165
E-Mail: gerald.enzner@rub.de

Contact

Prof. Dr.-Ing. Walter Kellermann
Friedrich-Alexander-Universität Erlangen-Nürnberg
Phone: +49 9131/85-27669
Fax: +49 9131/85-28849
E-Mail: Walter.Kellermann@FAU.de

Contact


Dr.-Ing. Aleksej Chinaev

Communications Engineering / Heinz Nixdorf Institute
