Directional Steering of Audio Signal Using Parametric Array in Air

DOI : 10.17577/IJERTV3IS10578


Sreedhu T Sasi, Anaswara V Nath, Archana V R

College of Engineering, Cherthala

The authors are currently pursuing the master's degree program in Signal Processing at Cochin University of Science and Technology, India.

Abstract

Directional audio is a recent technology that creates focused beams of sound. By projecting sound to one location, specific listeners can be targeted without others nearby hearing it. The parametric loudspeaker provides an effective means of projecting sound in a highly directional manner without using large loudspeaker arrays to form sharp directional beams. Deployment of parametric loudspeakers in many public places can reduce noise pollution. Digital signal processing plays a significant role in enhancing the aural quality of parametric loudspeakers, and array processing can help to shape and steer the beam electronically.

  1. Introduction

    Controllable audible sound can greatly reduce noise pollution in public places. For example, in a library, a personal announcement system can help to communicate with a group of people without disturbing others. A directional sound system in a museum can provide localized sound for those who wish to hear it, and quiet for everyone else.

    The parametric array has been studied widely in the context of underwater sonar and, to a lesser extent, in air. It exploits an effect known as self-demodulation to generate extremely directive low-frequency waves, which would otherwise require an enormous array of conventional transducers. Self-demodulation occurs when non-linearities of a compressible medium cause high-frequency wave components to interact. This interaction produces new frequencies at the sums and differences of the individual frequency components. With the development of high-power transducers and signal processing techniques, the parametric array has been exploited for audio applications. By using the nonlinear interaction of sound beams, Yoneyama et al. designed a novel directional parametric loudspeaker in 1983. Array processing techniques [39] can also be applied to form and steer the demodulated sound beam electronically. This electronic beam control is advantageous because it allows the parametric loudspeakers to be mounted

    directly on the wall without the need for a mechanical pan-and-tilt system.


  2. Prior art

    In 1963, Westervelt described how an audible difference-frequency signal is generated from two high-frequency collimated beams of sound. These high-frequency sound beams are commonly referred to as primary waves. The nonlinear interaction of primary waves in media such as air and water produces an end-fire array of virtual acoustic sources that is referred to as the parametric array. The primary interest in the parametric array focuses on the difference-frequency signal that is created along the axis of the main beam (or virtual end-fire array) at the speed of sound. This phenomenon results in a sharply directional beam of audible sound. Berktay [4] extended Westervelt's analysis to spherically and cylindrically spreading sources to derive a simple expression for predicting the far-field array response. In contrast, the only other way to produce an end-fire array of audible acoustic sources is to use a large array of conventional loudspeakers lined up directly in front of each other in the shape of a long column, an approach that is costly, bulky, and impractical. The parametric array therefore provides a practical way of projecting a very narrow sound beam in air. The use of the parametric array in air was first verified experimentally by Bennett and Blackstock [5]. Since then, the parametric array in air has been developed further, and the device that generates this phenomenon is generally referred to as a parametric loudspeaker. In 1983, Yoneyama et al. used a parametric loudspeaker made up of 547 PZT transducers and a modulation circuit to generate broadband audio [6]. They introduced the term "audio spotlight" for audio applications of the parametric array. Their experiments revealed that the demodulated sound wave generated by the nonlinear acoustic phenomena has a very sharp directivity pattern. However, the demodulated signal suffered from high harmonic distortion, low electrical-to-acoustic conversion efficiency, and poor frequency response.

    In 1984, Kamakura et al. [7] reduced the distortion of double-sideband amplitude modulation (DSBAM) by preprocessing the modulating signal. Similarly, Pompei introduced a practical device in 1998 [8], which adopted the preprocessing technique proposed by Kite et al. [9]. Their approaches involved square-rooting the modulating signal, which reduced harmonic distortion and improved the frequency response as compared to DSBAM, but at the cost of requiring very wide bandwidth (>10 kHz) ultrasonic emitters. About the same time, Croft and Norris [10] reported a similar device with proprietary algorithms and emitters and commercialized their devices. Several recent studies of parametric loudspeakers [11]-[13] have resulted in new insights and observations. In addition, new processing techniques [14]-[16] have also been developed to further reduce the distortion, enhance the perceptual quality of parametric loudspeakers, and provide constant beamwidth over the possible range of beamsteering angles.

    In this paper, we highlight a steerable audio system using the parametric array in air and validate its performance through subjective tests. This paper is organized as follows. Section 3 presents the theoretical overview for directing audio. Sections 4 and 5 give the design approach of the loudspeaker array. Section 6 provides the simulation results to validate the proposed beamsteering algorithm. Section 7 discusses some practical limitations for real-time implementation. Lastly, Section 8 concludes this paper.

  3. Theory

    1. Non-linear Acoustics

      A nonlinear restoring force on a displaced molecule generates sum and difference (combination) tones. This concept was first introduced by Helmholtz. These were not subjective tones but ones that actually existed in the air. He argued that the springs that keep air molecules spaced apart exhibit a non-linear restoring force characteristic that manifests itself at higher displacement amplitudes. Helmholtz's theory and formulas predicted results that initially seemed to match what he had measured. Since two primary frequencies (in our case, ultrasonic ones) are generating new frequencies in the air, the shape of the primary wave must change as it propagates. Fourier analysis tells us that any wave can be described with a series of sines and

      cosines. If one emits two high-amplitude sine waves, as in the Helmholtz example above, new frequency terms appear, and the shape of the wave train changes. The accepted mechanism for this propagation distortion is explained by A. L. Thuras, R. T. Jenkins, and H. T. O'Neil of Bell Labs in a 1934 paper called "Extraneous Frequencies Generated in Air Carrying Intense Sound Waves." The following explanation is taken from this paper, and from a similar paper by L. J. Black, "A Physical Analysis of Distortion Produced by the Non-Linearity of the Medium," J. Acoust. Soc. Amer. (1940). It turns out that if equal positive and negative increments of pressure are impressed on a mass of air, the changes in the volume of the mass will not be equal. The volume change for the positive pressure will be less than the volume change for the equal negative pressure.

      This phenomenon may be unfamiliar to those in the relatively linear acoustics field of audio. The wave equation which is customarily used in the solution of acoustical problems is valid for small-signal propagation only. The assumption involved in the derivation of the small-signal wave equation is that the maximum displacement of the air particles, x, be small compared to the wavelength λ, i.e., x ≪ λ. In other words, the pressure fluctuations are so small that the specific volume appears to be a linear function of pressure. When this is not satisfied, a plane wave or even a spherical wave propagated in the medium will not preserve its shape. As a result, the magnitude of the fundamental decreases and the magnitude of the distortion increases with propagation distance. A simple explanation of this phenomenon is given by L. J. Black. Each part of the wave travels with a velocity that is the sum of the small-signal velocity and the particle velocity. The maximum condensation in a wave is at the point of maximum pressure, and this portion of the wave has the greatest phase velocity. The fact that the phase velocity is greater at the peak of the wave than at the trough results in a wave whose shape changes continuously as it propagates.

      Assuming the normal small-signal velocity of c = 344 m/s and working through the equations for a signal with a sound pressure level of 140 dB (re 20 µPa, i.e., a pressure amplitude of about 20 µPa × 10^(140/20) = 200 Pa), Fig. 1 shows the shape of a wave which has been distorted by the mechanism explained above.

      Fig. 1. Pressure fluctuations of a wave

      The blue line represents a pure sine wave (a single-frequency signal); the red line represents the shape of the same wave after it has propagated through the non-linear medium for a time. A high-amplitude sine wave tends to form into a sawtooth wave as it travels. The sawtooth wave contains odd and even harmonics, and the 2nd harmonic is fully half the amplitude of the fundamental. This means that strong harmonics are created during the propagation of a high-intensity tone. In the two-tone case, f1 and f2, it can be shown that the harmonics of each will appear, as will the sum and difference frequencies, f1 + f2 and |f1 − f2|. This is the simplest case of a parametric acoustic array.
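      To make the two-tone case concrete, the following Python sketch (not from the original paper; the sampling rate, tone frequencies, and the simple quadratic nonlinearity standing in for the medium are assumptions) shows how intermodulation of two ultrasonic primaries produces a spectral line at the audible difference frequency |f1 − f2|.

      import numpy as np

      fs = 500_000                      # sampling rate in Hz (assumed for illustration)
      t = np.arange(0, 0.02, 1 / fs)
      f1, f2 = 40_000, 41_000           # two ultrasonic primary tones (assumed)

      p = np.sin(2 * np.pi * f1 * t) + np.sin(2 * np.pi * f2 * t)

      # Toy quadratic nonlinearity standing in for the medium: the squared term
      # generates 2*f1, 2*f2, f1 + f2 and |f1 - f2| components.
      p_nl = p + 0.1 * p ** 2

      spectrum = np.abs(np.fft.rfft(p_nl * np.hanning(len(p_nl))))
      freqs = np.fft.rfftfreq(len(p_nl), 1 / fs)

      # Inspect the spectral lines of interest: the 1 kHz difference tone appears.
      for f in (1_000, 40_000, 41_000, 80_000, 81_000, 82_000):
          k = int(np.argmin(np.abs(freqs - f)))
          print(f"{f:6d} Hz -> {spectrum[k]:8.1f}")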

    2. Berktay's Far-Field Model

      Berktay's analysis [4] led to a simplified model, known as the Berktay far-field model, which is widely used to approximate nonlinear sound propagation. His model provides a simple expression that can be used to predict the far-field array response of the parametric loudspeaker. The expression states that the demodulated signal (or audible difference-frequency) pressure p2(t) along the axis of propagation is proportional to the second time-derivative of the square of the envelope of the amplitude-modulated ultrasonic carrier, as follows:

      p2(τ) ≈ [β p0² a² / (16 ρ0 c0⁴ z α0)] d²E²(τ)/dτ²      (1.1)

      where β is the coefficient of nonlinearity for air, p0 is the pressure amplitude at the ultrasound source, a is the source radius, ρ0 is the ambient density, c0 is the small-signal sound speed, z is the coordinate along the axis of the beam, α0 is the absorption coefficient in air, τ is the retarded time, and E(τ) denotes the modulation envelope of the ultrasonic carrier. Equation (1.1) shows that the demodulated signal is proportional to the size of the ultrasound source, a; the pressure amplitude of the primary wave, p0; and the amplitude of the envelope function, E(τ). Therefore, a higher audible (demodulated) sound pressure at a distance can be achieved by increasing the values of these three parameters. Berktay's model is able to predict the performance of the parametric array in air and provides important guidelines in designing suitable parametric loudspeakers for different applications.

    3. Audio Preprocessing

      Signal pre-processing has three aims: amplitude modulation, distortion reduction, and transducer response compensation. In this paper, we consider the transducer to have an ideal response, so we are only interested in amplitude modulation and distortion reduction.

      If classical amplitude modulation is used, the envelope function is given by f = 1 + m·s(t), where s(t) is the audible signal to be transmitted and m is the modulation index. The Berktay equation (1.1) shows that the demodulated wave is proportional to the second derivative of f², therefore to obtain s(t) as the self-demodulated wave in the far field we have to use the modulation function

      s1(t) = √(1 + m ∬ s(t) dt²)      (1.2)

      This processing is the ideal one. However, the square root of a signal has an infinite spectrum, while a transducer has a limited bandwidth. In practical terms, in order to have a self-demodulated wave with a satisfactory distortion rate, the transducer must have a bandwidth at least four times larger than the highest frequency of s(t). This constraint can be difficult to fulfil when complex signals, e.g., music, have to be transmitted. Another solution is to use single-sideband amplitude modulation (SSB). In this case the self-demodulated signal has less distortion than when classical amplitude modulation is used without other processing. To decrease the residual distortion, it is possible to simulate the self-demodulation, extract the distortion components, and correct the signal, as shown in Fig. 2. The advantage of this correction is that it does not increase the necessary bandwidth and can be used in iterative processing. These two processing methods give good results, but both have constraints: if the transducer has sufficient bandwidth the first method can be used, but if the transducer bandwidth is narrow and the calculation time is not a problem, then the second method is better.

      Fig 2 Correction of distortion
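      As a rough numerical illustration of Eqs. (1.1) and (1.2) (this sketch is not part of the original paper; the sample rate, test tone, and modulation index are assumed), the following Python fragment applies square-root preprocessing to an audible tone and then checks that the Berktay-style demodulation, i.e., the second derivative of the squared envelope, recovers the tone up to a scale factor.

      import numpy as np

      fs = 192_000                                 # sample rate in Hz (assumed)
      t = np.arange(0, 0.01, 1 / fs)
      s = 0.8 * np.sin(2 * np.pi * 2_000 * t)      # audible signal to transmit (assumed)
      m = 0.9                                      # modulation index (assumed)

      # Square-root preprocessing, Eq. (1.2): double-integrate s, then take the root.
      ss = np.cumsum(np.cumsum(s)) / fs ** 2
      envelope = np.sqrt(np.clip(1.0 + m * ss, 0.0, None))

      # Berktay far-field model, Eq. (1.1): the demodulated pressure is proportional
      # to the second time-derivative of the squared envelope (constants dropped).
      demod = np.gradient(np.gradient(envelope ** 2, 1 / fs), 1 / fs)

      # Up to the factor m, the demodulated wave should track the original audio.
      print(np.corrcoef(demod[200:-200], s[200:-200])[0, 1])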

    4. Beam Forming

      A beamformer [32] is a spatial filter that processes the data obtained from an array of sensors in a manner that serves to enhance the amplitude of the desired signal wavefront relative to background noise and interference. Signals from a particular angle or set of angles are enhanced by constructive combination, and noise from other angles is rejected by destructive interference. The spatial discrimination capability depends on the size of the spatial aperture: as the aperture increases, the discrimination improves. For computation of the delays, the sensors need to be represented in a three-dimensional co-ordinate system, i.e. each sensor position is represented by a 3D vector.

      Consider a group of M sensors located in space, whose position vectors are given by r_i, i = 1, 2, …, M. Each r_i is a 3D vector representing the x, y, z co-ordinates of the sensor with reference to the origin. Let

      s(n) = e^{j2πfn/fs}      (1.3)

      be a complex sinusoid propagating through the medium with unit direction vector u. The time-delayed signal received at the ith sensor is

      x_i(n) = exp[j(2πfn/fs + k r_i·u)], i = 1, 2, …, M      (1.4)

      where k = 2πf/c.

      Fig 3 Plane wave impinging on a uniform linear array

      1. Conventional Beamforming

        The simplest approach to beamforming is conventional delay-and-sum beamforming. The underlying idea is very simple: if a propagating signal is present in an array's aperture, the sensor outputs, delayed by appropriate amounts and added together, reinforce the signal with respect to noise or waves propagating in different directions. The delays that reinforce the signal are directly related to the length of time it takes for the signal to propagate between sensors. Fig. 4 shows the block diagram of the conventional delay-and-sum beamformer in the time domain. Here, inputs from each sensor are shifted so that the signals are aligned in time and are then added. However, because the delays are generally not integer multiples of the sampling period T, we cannot form sums that involve sensor signals delayed by non-integer multiples of T. To reduce the aberrations introduced by delay quantization, we can interpolate between the samples of the sensor signals.

        Fig 4 Conventional Delay-and-Sum Beamforming
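        The following Python sketch (not from the paper; the element count, spacing, frequency, and steering angle are assumed, and the time delays are applied here as narrowband phase shifts rather than interpolated sample delays) illustrates delay-and-sum steering of a uniform linear array using the plane-wave model of Eq. (1.4).

        import numpy as np

        c, f = 343.0, 40_000.0        # speed of sound (m/s) and operating frequency (assumed)
        M, d = 8, 0.004               # 8 elements, 4 mm spacing (assumed)
        theta0 = np.deg2rad(20.0)     # desired steering direction (assumed)

        n = np.arange(M)
        k = 2 * np.pi * f / c

        # Delay-and-sum as phase shifts: align the wavefront arriving from theta0
        # so that the M channels add coherently.
        w = np.exp(-1j * k * n * d * np.sin(theta0)) / M

        def response(theta):
            # Plane-wave steering vector in the spirit of Eq. (1.4).
            a = np.exp(1j * k * n * d * np.sin(theta))
            return abs(np.conj(w) @ a)

        for deg in (0, 10, 20, 30, 60):
            print(f"{deg:3d} deg -> {20 * np.log10(response(np.deg2rad(deg))):6.1f} dB")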

      2. Adaptive Beamforming

        Adaptive beamforming optimizes a collection of weight vectors to localize targets via correlation with the data in a noisy environment. These weight vectors generate a beampattern that places nulls in the directions of unwanted noise (i.e., signals, called interference, arriving from directions other than the direction of interest). In contrast to conventional beamforming, where the weight vector is constant and independent of the incoming data, adaptive beamforming (ABF) algorithms use information about the cross-spectral density matrix (CSM) to compute the weights in such a way as to improve the beamforming output.

        Minimum variance distortionless response (MVDR) is an adaptive algorithm which minimizes the output power in all directions subject to the condition that the gain in the steering direction is unity. The steering direction is the bearing towards which the array is steered to look for a particular incoming signal. This algorithm gives optimum performance by steering nulls in the directions of interference and also offers better performance in the case of correlated noise sources. Fig. 5 shows a comparison between the conventional and MVDR beamformers steered at 60° and 55°.

        Fig 5 Conventional and MVDR beamformer output
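        As a minimal sketch of the MVDR idea (not from the paper; the array geometry, interference direction, and noise levels are assumed), the weights below minimize output power for a simulated covariance matrix while keeping unity gain towards the 60° look direction, which places a null near the 55° interferer.

        import numpy as np

        c, f = 343.0, 40_000.0
        M, d = 8, 0.004                       # assumed array geometry
        n = np.arange(M)
        k = 2 * np.pi * f / c

        def steer(theta):
            # Plane-wave steering vector for a uniform linear array.
            return np.exp(1j * k * n * d * np.sin(theta))

        # Simulated covariance: a strong interferer from 55 deg plus white sensor noise.
        a_int = steer(np.deg2rad(55.0))
        R = 10.0 * np.outer(a_int, np.conj(a_int)) + np.eye(M)

        # MVDR weights: w = R^-1 a / (a^H R^-1 a), unity gain towards 60 deg.
        a_look = steer(np.deg2rad(60.0))
        r_inv_a = np.linalg.solve(R, a_look)
        w = r_inv_a / (np.conj(a_look) @ r_inv_a)

        for deg in (55, 60):
            print(f"gain towards {deg} deg: {abs(np.conj(w) @ steer(np.deg2rad(deg))):.3f}")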

      The beam pattern of the demodulated signal can be controlled by array processing techniques. To simplify the equations for studying the beam pattern, a primary (wave) source with Gaussian amplitude shading is assumed. Using quasi-linear theory [23], the far-field directivity function of the primary wave for a Gaussian source is derived as

      D1(k, θ) = exp[−(1/4)(ka)² tan²θ]      (1.5)

      where θ is the angle with respect to the axis of the beam and a is the source radius. For a bifrequency Gaussian source, the far-field directivity of the difference frequency (or demodulated signal), D_d(θ), can be described as the product of the primary waves' directivities:

      D_d(θ) = D1a(θ) D1b(θ)      (1.6)

      where D1a(θ) and D1b(θ) are the primary beam directivities at frequency fa (which is also the carrier frequency) and fb (which is the modulating frequency), respectively. Note that the bifrequency Gaussian source ignores the frequency dependence of the attenuation coefficient in air.

      Consider a group of M weighted primary sources that are equally spaced, with d metres between adjacent sources. The far-field directivity of the weighted primary source array for frequency fa is given by D1(ka, θ)H(ka, θ), where D1(ka, θ) is the aperture directivity and H(ka, θ) = (1/M) Σ_{n=0}^{M−1} w_an e^{jω_a(nd/c)sinθ} is the far-field array response. Here, w_an is the nth emitter weighting for n = 0, 1, 2, …, M−1. Similarly, the far-field directivity for primary frequency fb is given as D1(kb, θ)H(kb, θ). Hence, in the case of beamforming Gaussian sources, the beam pattern for the audible demodulated signal can be estimated as

      D_d(θ) = D1(ka, θ)H(ka, θ) D1(kb, θ)H(kb, θ)      (1.7)

      An algorithm has been proposed [29] to control the sidelobe level of the demodulated signal's directivity, forming a beamformer with constant beamwidth for the difference frequency in parametric loudspeakers. A single set of weights w_an and a weighting response vector W_bn(e^{jω}), n = 1, 2, …, M−1, associated with the carrier frequency and the modulated broadband frequency, respectively, can be computed using the Chebyshev window weighting function with a specified amount of sidelobe attenuation. Note that the weight response vector W_bn(e^{jω}) associated with the modulated broadband frequency is a frequency-dependent function, so as to achieve a broadband beamformer. Beamsteering in parametric loudspeakers can be extended from the previous constant-beamwidth beamformer structure by adding delays τ_a0 and τ_b0 to the carrier frequency and the sideband frequency, respectively, as shown in Fig. 6. Since SSB is used in this beamsteering structure, either the lower-sideband (LSB) or the upper-sideband modulation output can be derived from the nth digital-to-analog converter (DACn). For LSB, the output from the nth DAC is given as

      y_LSB,n(nT) = 0.5{w_an cos[ω_a(t − nτ_a0)] + w_bn cos[(ω_a − ω_d)(t − nτ_b0)]}      (1.8)

      where ω_a and ω_d are the angular frequencies of the carrier and the difference frequency, respectively. The main design issue of the beamsteering algorithm is the selection of a set of effective delays for both carrier and sideband frequencies to determine the direction of beamsteering. Since the carrier is a fixed single frequency, the delay for the carrier frequency can be computed as

      τ_a0 = (d/c) sin θ_d      (1.9)

      where θ_d is the desired steering angle of the difference frequency. As the steerable delay of the difference frequency is solely determined by the carrier signal, due to the product directivity principle given in (1.6), the delay for the sideband frequency, τ_b0, can be rounded to the integer multiple of the sampling period that is closest to the desired steering angle for simple implementation. The objectives of the carrier frequency's weighting function, w_an, are to control the difference frequency's beamwidth and to attenuate the carrier frequency's sidelobes. The function of the sideband frequency's weighting function, w_bn, is to generate a flat directivity response over a range of angles across all audible frequencies, such that the difference frequency's sound pressure level is the same for different steering angles. An additional objective of this weighting function is to attenuate the sideband frequency's sidelobes, such that the generated difference frequency has lower sidelobe directivity.

      Fig 6 Proposed Beamsteering Algorithm

      Transducer arrays can be of nearly any shape: linear, circular, rectangular, or even spherical [15]. A one-dimensional array allows beamforming in one dimension; additional array dimensions allow for two-dimensional beamforming. Given the limited number of transducers and the amount of time we have, a linear array is the best choice.

      Transducer spacing: The spacing of the transducers is driven by the intended operating frequency range. For spatial filtering, a narrower beamwidth is an advantage, because signals which do not arrive from the intended direction are attenuated. A narrow beamwidth is analogous to a narrow transition band of a conventional filter. Lower frequencies correlate better with delayed versions of themselves than high frequencies, so the lower the frequency, the broader the beam. Conversely, a longer array results in a greater delay between the end transducers and thus reduces the beamwidth. At the same time, the spacing between transducers determines the highest operating frequency [33]: if the wavelength of the incoming signal is less than the spacing between the transducers, spatial aliasing occurs [16], [18]. The spacing between transducers also sets a maximum time delay which, together with the sampling frequency, limits the number of unique beams that can be formed:

      maxbeams = 2 · Fs · timespacing      (1.10)

      The variable timespacing is the maximum amount of time it takes for sound to travel from one transducer to an adjacent transducer, as is the case when the source lies along the line created by the array.
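      The short Python sketch below ties these design equations together (it is illustrative only; the element count, spacing, element radius, primary frequencies, and steering angle are assumed). It computes the carrier delay of Eq. (1.9), builds Chebyshev-weighted array responses, and estimates the demodulated beam pattern from the product of the two primary directivities as in Eqs. (1.5)-(1.7).

      import numpy as np
      from scipy.signal import windows

      c = 343.0
      fa, fb = 40_000.0, 30_000.0      # carrier and sideband primaries -> 10 kHz difference (assumed)
      M, d, a = 8, 0.008, 0.005        # emitters, spacing (m), element radius (m) (assumed)
      theta = np.deg2rad(np.linspace(-89.0, 89.0, 357))

      w = windows.chebwin(M, at=30)    # Chebyshev weighting, 30 dB sidelobe attenuation
      w = w / w.sum()
      n = np.arange(M)

      theta_d = np.deg2rad(10.0)       # desired steering angle of the difference frequency
      tau_a0 = d / c * np.sin(theta_d)                                   # Eq. (1.9)

      def primary(f):
          k = 2 * np.pi * f / c
          aperture = np.exp(-0.25 * (k * a) ** 2 * np.tan(theta) ** 2)   # Eq. (1.5)
          array = np.abs((w[:, None]
                          * np.exp(1j * k * n[:, None] * (d * np.sin(theta) - c * tau_a0))).sum(0))
          return aperture * array

      D_diff = primary(fa) * primary(fb)                                 # Eqs. (1.6)-(1.7)
      print("demodulated beam peaks near",
            round(float(np.degrees(theta[np.argmax(D_diff)])), 1), "deg")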

  6. Simulation Results

    Fig 7 Difference frequency (10 kHz) beamsteering for θ = 10°

  7. Practical Limitations

    There are two major physical limitations of the parametric loudspeaker due to its sound generation principle. Firstly, during the self-demodulation process caused by the parametric array effect, higher harmonic components of the original sound are generated as by-products. Secondly, this self-demodulation process shows a high-pass filtering effect, resulting in very poor bass quality of the parametric loudspeaker. Research over the last two decades has mainly focused on reducing harmonic distortion using different preprocessing techniques and on controlling the beam patterns of the parametric loudspeaker.

    One way to rectify the second problem is to augment the parametric loudspeakers with conventional loudspeakers or subwoofers. In other words, we can channel the mid- and high-band frequency content to the parametric loudspeakers and leave the low-band frequency content to the subwoofer. However, this approach incurs higher cost, requires additional space to house the subwoofers, and is not appropriate for portable devices. Another approach is to recreate the sensation of low-frequency tones by introducing a harmonic series of overtones without the presence of the physical fundamental (low) frequency. This psychoacoustic phenomenon is known as the missing fundamental [25] and can be readily implemented using signal processing, as sketched below. A nonlinear function, which can be easily implemented digitally, is usually used to create the harmonic series of overtones, which are added to the highpass-filtered signal to create a perceptually bass-rich sound track. Studies on how different nonlinear functions affect the low-frequency perception of sound have been reported and applied to parametric loudspeakers with some success. However, new transducer technology with larger diameters must be realized to achieve better bass perception and compete with conventional loudspeakers in terms of low-frequency quality.
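    The fragment below is a minimal Python sketch of this virtual-bass idea (not from the paper; the sample rate, crossover frequency, rectifier nonlinearity, and mixing gain are illustrative assumptions): the low band is rectified to create overtones of the fundamental, and only those overtones are mixed back into the high-passed signal.

    import numpy as np
    from scipy.signal import butter, sosfilt

    fs = 48_000                                   # sample rate (assumed)
    t = np.arange(0, 0.5, 1 / fs)
    x = np.sin(2 * np.pi * 60 * t)                # 60 Hz fundamental the emitter cannot reproduce

    # Split the signal around an assumed 100 Hz reproducible-band limit.
    sos_lo = butter(4, 100, btype="lowpass", fs=fs, output="sos")
    sos_hi = butter(4, 100, btype="highpass", fs=fs, output="sos")
    low, high = sosfilt(sos_lo, x), sosfilt(sos_hi, x)

    # Nonlinear function (a full-wave rectifier, one common choice) creates a
    # harmonic series of the low-band content: 120 Hz, 240 Hz, ...
    overtones = np.abs(low) - np.mean(np.abs(low))

    # Keep only the overtones that fall in the reproducible band, then mix them in.
    y = high + 0.5 * sosfilt(sos_hi, overtones)

    spectrum = np.abs(np.fft.rfft(y))
    freqs = np.fft.rfftfreq(len(y), 1 / fs)
    print("dominant component:", round(float(freqs[np.argmax(spectrum)])), "Hz")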

    In addition, there are several research challenges in the beam control of parametric loudspeakers. These topics include the distribution and arrangement of the ultrasonic emitters in different array configurations to enhance the directivity patterns of different frequency bands; complexity reduction using different array configurations; grating lobe elimination in parametric loudspeakers; and the phase response of the parametric array effect in air.

  8. Conclusion

The parametric loudspeaker provides an effective means of projecting sound in a highly directional manner without using large loudspeaker arrays to form sharp directional beams. It can be augmented with conventional loudspeakers to create a more immersive audio soundscape. Parametric loudspeakers can be deployed in many public places where private messaging can make a difference: attracting attention, conveying messages without the need for headphones, and creating private listening zones to reduce noise pollution. Digital signal processing plays a significant role in enhancing the aural quality of the parametric loudspeakers, and array processing can help to shape and steer the beam electronically. In addition, other signal processing techniques can also be applied to add more flexibility and improve the performance of parametric loudspeakers. These developments rely heavily on the latest techniques in acoustics and audio signal processing to overcome some of the current limitations in nonlinear acoustics modeling and ultrasonic transducer technology. A useful feature in sound projection is to realize a high-accuracy digital beamsteering capability in air using an array of parametric loudspeakers. An in-depth study of the theoretical model of wave steering in the parametric array in air can provide hints on how best to steer the demodulated signal in an efficient manner. As seen from this article, digital signal processing provides the main engine to achieve directional sound projection, and new digital processing techniques will be devised to provide better quality, controllable audio beaming, and efficient sound focusing devices in the future.

ACKNOWLEDGEMENT

The authors would like to thank the Associate Editor and the anonymous referees for their helpful comments.

REFERENCES

  1. Woon-Seng Gan, Ee-Leng Tan, and Sen M. Kuo, "Audio projection," IEEE Signal Processing Magazine, pp. 43-57, January 2011.

  2. T. Kamakura, K. Aoki, and S. Sakai, "A highly directional audio system using a parametric array in air," in Proc. 9th Western Pacific Acoustics Conf., Seoul, Korea, June 2006, pp. 1-8.

  3. P. J. Westervelt, "Parametric acoustic array," J. Acoust. Soc. Amer., vol. 35, no. 4, pp. 535-537, 1963.

  4. H. O. Berktay, "Possible exploitation of non-linear acoustics in underwater transmitting applications," J. Sound Vib., vol. 2, no. 4, pp. 435-461, 1965.

  5. M. B. Bennett and D. T. Blackstock, "Parametric array in air," J. Acoust. Soc. Amer., vol. 57, no. 3, pp. 562-568, 1975.

  6. M. Yoneyama and J. Fujimoto, "The audio spotlight: An application of nonlinear interaction of sound waves to a new type of loudspeaker design," J. Acoust. Soc. Amer., vol. 73, no. 5, pp. 1532-1536, 1983.

  7. T. Kamakura, M. Yoneyama, and K. Ikegaya, "Development of parametric loudspeaker for practical use," in Proc. Int. Symp. Nonlinear Acoustics, 1984, pp. 147-150.

  8. F. J. Pompei, "The use of airborne ultrasonics for generating audible sound beams," J. Audio Eng. Soc., vol. 47, no. 9, pp. 726-731, 1999.

  9. T. D. Kite, J. T. Post, and M. F. Hamilton, "Parametric array in air: Distortion reduction by preprocessing," in Proc. 16th Int. Congress on Acoustics, 1998, vol. 2, pp. 1091-1092.

  10. J. J. Croft and J. O. Norris, "White paper on hypersonic sound," American Technology Corporation, 2001/2002, pp. 1-28.

  11. W. Kim and V. W. Sparrow, "Audio application of the parametric array implementation through a numerical model," in Proc. 113th Convention of the Audio Engineering Society, Los Angeles, CA, Oct. 5-8, 2002, pp. 1-16.

  12. D. Olszewski, "Targeted audio," in Computers in the Human Interaction Loop, A. Waibel and R. Stiefelhagen, Eds. London: Springer-Verlag, 2009, pp. 133-141.

  13. J. Yang, K. Sha, W. S. Gan, and J. Tian, "Modeling of finite-amplitude sound beams: Second order fields generated by a parametric loudspeaker," IEEE Trans. Ultrason., Ferroelectr., Freq. Contr., vol. 52, no. 4, pp. 610-618, 2005.

  14. T. Kamakura, K. Aoki, and Y. Kumamoto, "Suitable modulation of the carrier ultrasound for a parametric loudspeaker," Acustica, vol. 73, no. 4, pp. 215-217, 1991.

  15. C. M. Lee and W. S. Gan, "Bandwidth-efficient recursive pth-order equalization for correcting baseband distortion in parametric loudspeakers," IEEE Trans. Audio, Speech, Lang. Processing, vol. 14, no. 2, pp. 706-710, 2006.

  16. E. L. Tan, P. F. Ji, and W. S. Gan, "On preprocessing techniques for bandlimited parametric loudspeakers," Appl. Acoust., vol. 71, no. 5, pp. 486-492, Dec. 2009.

  17. Holosonics Audio Spotlight [Online]. Available: http://www.holosonics.com/

  18. Sennheiser AudioBeam [Online]. Available: http://sennheiserusa.com/

  19. Mitsubishi Electric MSP-50E [Online]. Available: http://www.mee.co.jp/

  20. Y. Nakashima, T. Ohya, and T. Yoshimura, "Prototype of parametric loudspeaker on mobile phone and its acoustical characteristics," in Proc. 118th Audio Engineering Society Convention, Barcelona, Spain, May 28-31, 2005, pp. 1-6.

  21. Y. Roh and C. Moon, "Design and fabrication of an ultrasonic speaker with thickness mode piezoceramic transducers," Sens. Actuat. A: Phys., vol. 99, no. 3, pp. 321-326, 2002.

  22. Chris Kyriakakis, Panagiotis Tsakalides, and Tomlinson Holman, "Acquisition and rendering methods for immersive audio: Surrounded by sound," IEEE Signal Processing Magazine, Jan. 1999.

  23. Hamid Krim and Mats Viberg, "Two decades of array signal processing research," IEEE Signal Processing Magazine, pp. 67-94, July 1996.

  24. M. F. Hamilton and D. T. Blackstock, Nonlinear Acoustics. San Diego, CA: Academic Press, 1998.

  25. J. Yang, W. S. Gan, M. H. Er, C. M. Lee, K. S. Tan, Y. H. Lew, F. A. Karnapi, K. Sha, and Y. Wang, "Steering of directional sound beams," U.S. Patent, pp. 1-31, December 5, 2006.

  26. H. Fastl and E. Zwicker, Psychoacoustics: Facts and Models, 3rd ed. Berlin: Springer-Verlag, 2007.

  27. Wen-Kung Tseng, "Beam width control for a directional audio system," International Journal of Innovative Computing, Information and Control, vol. 9, pp. 3069-3078, July 2013.

  28. F. A. Karnapi and W. S. Gan, "Method to enhance low frequency perception from a parametric loudspeaker," in Proc. 112th Audio Engineering Society Convention, Munich, Germany, May 2002, pp. 1-5.

  29. K. S. Tan, W. S. Gan, J. Yang, and M. H. Er, "Constant beamwidth beamformer for difference frequency in parametric array," in Proc. 2003 Int. Conf. Acoustics, Speech, and Signal Processing, Hong Kong, China, April 2003, pp. 1-4.

  30. W. S. Gan, J. Yang, K. S. Tan, and M. H. Er, "A digital beamsteerer for difference frequency in parametric array," IEEE Trans. Audio, Speech, Lang. Processing, vol. 14, no. 3, pp. 1018-1025, 2006.

  31. E. L. Tan, W. S. Gan, and J. Reuben, "Augmented audio system," U.S. Provisional Patent US 61/298,187, Jan. 2010.

  32. Chuang Shi, Hao Mu, and Woon-Seng Gan, "A psychoacoustical preprocessing technique for virtual bass enhancement of the parametric loudspeaker," in Proc. ICASSP, 2013.

  33. D. Johnson and D. Dudgeon, Array Signal Processing: Concepts and Techniques. Upper Saddle River, NJ: Prentice Hall, 1993.

  34. American Technology Corporation (ATC), HyperSonic Sound [Online]. Available: http://www.atcsd.com/

  35. Tadashi Matsui, Daisuke Ikefuji, Masato Nakayama, and Takanobu Nishiura, "A design of audio spot based on separating emission of the carrier and sideband waves," Proceedings of Meetings on Acoustics, vol. 19, 055049, 2013.

  36. Tomoo Kamakura and Kenichi Aoki, "A highly directional sound system using a parametric array in air," in Proc. 9th Western Pacific Acoustics Conference, Korea, 2006.

  37. Jun Yang, Woon-Seng Gan, Khim-Sia Tan, and Meng-Hwa Er, "Acoustic beamforming of a parametric speaker comprising ultrasonic transducers," Sensors and Actuators A, vol. 125, pp. 91-99, 2005.

  38. Wen-Kung Tseng, "Beam width control for a directional audio system," International Journal of Innovative Computing, Information and Control, vol. 9, pp. 3069-3078, July 2013.

  39. Anna Rozanova-Pierrat, "Mathematical analysis of Khokhlov-Zabolotskaya-Kuznetsov (KZK) equation," Laboratoire Jacques-Louis Lions, Université Paris VI, September 13, 2006.

  40. H. L. Van Trees, Optimum Array Processing: Part IV of Detection, Estimation and Modulation Theory. New York: John Wiley and Sons, 2002.
