3D AudioSense

3D AudioSense BeagleBone Black Cape

31/3/2015

 
The purpose of the 3D AudioSense Cape is to record surrounding audio synchronously with other capes. Each cape, together with its BeagleBone host, becomes an audio sensor. Multiple sensors form a network that effectively creates a wireless distributed microphone array. The recorded audio stream is transmitted over a WiFi connection, while synchronization is provided by a separate low-latency radio channel.

A single 3D AudioSense Cape consists of the following components:
  • Dual built-in electret microphones along with low noise preamplifiers.
  • Optional dual external electret microphone connector.
  • Texas Instruments TLV320AIC3104 audio A/D and D/A converter.
  • ISM band wireless radio receiver for synchronization and control.
  • 2 kHz resonant “buzzer” for audio pulse emissions.
  • Lattice MachXO2 FPGA for audio & synchronization signal processing.

What the 3D AudioSense Cape can do

  • The 3D AudioSense Cape records audio with a resolution of 16 bits and a sampling rate of 48 kHz on both microphones, which is sufficient for most professional and amateur recording purposes.
  • The built-in microphones offer an 80 dB signal-to-noise ratio. For more demanding users, a single 3.5 mm jack connector allows an external microphone pair to be attached. A hardware switch selects between the built-in and external microphone for each channel independently. (A quick estimate of the raw data rate per cape is given below.)
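For orientation, the raw (uncompressed) PCM throughput a single cape produces at these settings can be estimated as follows; the figure excludes any WiFi protocol overhead:

# 48 kHz * 16 bits * 2 channels, expressed in Mbit/s
awk 'BEGIN { printf "%.3f Mbit/s per cape\n", 48000 * 16 * 2 / 1e6 }'
# prints: 1.536 Mbit/s per cape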

Wireless synchronization 

What makes the 3D AudioSense Cape different from a typical audio cape is its ability to synchronize wirelessly to a master reference clock. The synchronization allows multiple capes to capture the sound field with no more than a single sample of phase deviation between them. This unique feature makes the recorded sound suitable for algorithms such as sound source separation and localization. The synchronization signal, along with recording control information, is transmitted wirelessly from a single master device over a dedicated low-latency radio channel in the ISM band. The master device generates the reference clock for the whole sensor network and also sends the control commands. This allows very high flexibility in the arrangement of the microphone array.
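To put the one-sample guarantee in perspective, at a 48 kHz sampling rate a single sample corresponds to roughly 21 microseconds of timing deviation between any two capes:

# worst-case inter-cape timing error for one sample at 48 kHz
awk 'BEGIN { printf "%.1f microseconds\n", 1e6 / 48000 }'
# prints: 20.8 microseconds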

For the captured audio data to be useful, the precise position of every microphone needs to be known. The 3D AudioSense Cape is equipped with a buzzer that emits audio pulses. These pulses allow the other capes to localize the emitting cape by measuring the sound propagation time. Precise localization is possible thanks to the wireless synchronization of all capes.
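As a rough illustration of how the pulse-based localization works, the measured propagation time is converted to a distance using the speed of sound. The 5 ms delay below is only an example value; the second line shows how far sound travels during one sample period, i.e. the distance error introduced by the one-sample synchronization bound:

# distance = speed of sound (~343 m/s at 20 C) * measured propagation time
awk -v t_ms=5 'BEGIN { printf "distance: %.2f m\n", 343 * t_ms / 1000 }'
# sound travelled during one sample period at 48 kHz
awk 'BEGIN { printf "per-sample error: %.1f mm\n", 343 / 48000 * 1000 }'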

Detailed hardware description

The microphone preamplifier was specially designed to operate in the presence of electrical noise generated by the digital circuitry. The analog and digital components share the same power source, which turned out to be a significant noise source for the weak microphone signal. Both low- and high-frequency noise filters were therefore applied to the power supply of the analog part of the cape. Special care was taken with the PCB layout to separate the analog and digital parts and their signal tracks from each other. The result is a high quality audio signal free from unwanted digital interference.

Analog signals from the microphone preamplifiers are connected to the line inputs of the audio converter chip. The analog input stage of the A/D converter provides programmable gain amplifiers (PGA) which can be used to adjust signal levels just before digitization.

The same audio chip contains a D/A converter with a built-in headphone amplifier. The output of the amplifier is connected directly to a small speaker: a 2 kHz resonant buzzer whose task is to transmit short audio pulses at that frequency. Driving the buzzer from the D/A converter allows precise control over the shape of the transmitted waveform.
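For example, a shaped test pulse of this kind can be prepared on the host before being pushed through the D/A converter. The sketch below assumes SoX is installed and uses purely illustrative parameters (a 10 ms, 2 kHz burst with 2 ms fades to limit spectral splatter):

# 10 ms burst of a 2 kHz sine at 48 kHz / 16 bit, with 2 ms fade-in and fade-out
sox -n -r 48000 -b 16 pulse_2khz.wav synth 0.01 sine 2000 fade t 0.002 0.01 0.002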

The radio module operates in the ISM frequency band, so no radio license is required. Special care was taken in selecting the radio module. Because the reference clock has to be transmitted, the radio module must not perform any channel coding or packet encapsulation; this ensures low-latency signal propagation from the master transmitter to each cape. The drawback of this approach is a higher susceptibility to interference. Fortunately, dedicated FPGA modules together with an analog PLL device eliminate any impairments to the synchronization clock and data transmission.

The FPGA chip is what binds everything together. An FPGA is necessary because of the external wireless synchronization, and FPGAs are an excellent choice for time-critical signal processing. The key task of the FPGA is to decode the incoming synchronization signal by separating the audio clock from the control data stream. Timestamps received over the wireless channel are then embedded into the audio stream coming from the A/D converter. The resulting data stream is sent to the host BeagleBone board via the SPI interface.

3DAudioSense - Polish Product of the Future 2014 award

10/3/2015


 
We are proud to announce that 3D AudioSense received an award in the Polish Product of the Future competition.

The competition's objective is to promote and disseminate information on innovative techniques and technologies which have the potential to be applied on the Polish market. The competition is intended for innovative enterprises, research and development units, scientific institutes, research centres and also individual inventors from EU Member States.

The award ceremony took place on the 1st of December 2014 in Warsaw. The ceremony was attended by the Deputy Prime Minister and Minister of Economy, Janusz Piechociński, and the Chairperson of the Polish Agency for Enterprise Development, Bożena Lubańska-Kasprzak.
AudioSense Polish Product of the Future (PDF, 113 kB) - download file


Multichannel wave encoder now available in GStreamer

4/2/2015


 
WAV is one of the most popular audio formats in use. The bit stream of a .wav file consists of a fixed-size header (44 bytes) and a payload containing PCM samples. Crucial parameters such as the sample rate, recording duration, bit depth, number of channels and compression code are kept in the WAV header. The payload section carries only the sound samples for each channel, in interleaved or non-interleaved order.
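The fixed layout of the canonical 44-byte header makes it easy to inspect with standard tools. For a hypothetical file recording.wav, the channel count sits at byte offset 22 (2 bytes) and the sample rate at offset 24 (4 bytes, little-endian):

# dump the 44-byte WAV header; look for the "RIFF", "WAVE", "fmt " and "data" markers
xxd -l 44 recording.wav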

GStreamer is fully compatible with mono, stereo or multichannel streaming. Unfortunately, the standard wavenc element does not support encoding more than two channels. This functionality is needed in audio systems based on multichannel recording, 3DAudioSense among them, so we solved the issue by implementing an additional wavnchenc element, enabling GStreamer to produce multichannel wave files.

Imagine a situation where an application operates on samples from 9 independent sound sources. The interleave element produces a stream of raw output data as a combination of interlaced frames. Wavnchenc constructs the WAV header from data gathered from the input element and pushes it to the front of the streaming buffers. Finally, the output file can be played back, e.g. in Audacity. An exemplary pipeline is presented below:

gst-launch interleave name=il \
filesrc location=./object/object_0.wav ! wavparse ! audioconvert ! \
"audio/x-raw,rate=48000,channels=1,format=S24LE,layout=interleaved" ! il.sink_0 \
filesrc location=./object/object_1.wav ! wavparse ! audioconvert ! \
"audio/x-raw,rate=48000,channels=1,format=S24LE,layout=interleaved" ! il.sink_1 \
filesrc location=./object/object_2.wav ! wavparse ! audioconvert ! \
"audio/x-raw,rate=48000,channels=1,format=S24LE,layout=interleaved" ! il.sink_2 \
filesrc location=./object/object_3.wav ! wavparse ! audioconvert ! \
"audio/x-raw,rate=48000,channels=1,format=S24LE,layout=interleaved" ! il.sink_3 \
filesrc location=./object/object_4.wav ! wavparse ! audioconvert ! \
"audio/x-raw,rate=48000,channels=1,format=S24LE,layout=interleaved" ! il.sink_4 \
filesrc location=./object/object_5.wav ! wavparse ! audioconvert ! \
"audio/x-raw,rate=48000,channels=1,format=S24LE,layout=interleaved" ! il.sink_5 \
filesrc location=./object/object_6.wav ! wavparse ! audioconvert ! \
"audio/x-raw,rate=48000,channels=1,format=S24LE,layout=interleaved" ! il.sink_6 \
filesrc location=./object/object_7.wav ! wavparse ! audioconvert ! \
"audio/x-raw,rate=48000,channels=1,format=S24LE,layout=interleaved" ! il.sink_7 \
filesrc location=./object/object_8.wav ! wavparse ! audioconvert ! \
"audio/x-raw,rate=48000,channels=1,format=S24LE,layout=interleaved" ! il.sink_8 \
il.src ! "audio/x-raw,rate=48000,channels=9,format=S24LE,layout=interleaved" ! \
wavnchenc ! "audio/x-wav" ! filesink location=multichannel.wav
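A quick way to confirm that the resulting file really carries 9 channels is to query it with an external tool, for example SoX (assuming it is installed):

soxi multichannel.wav   # reports channel count, sample rate and bit depth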

It is possible to test our implementation of the wavnchenc plugin. Just download it from our git site and follow the installation commands. Our official GitHub site is [Zylia-RnD].


BeagleBone Black - Building Kernel and deploying it with new system distribution/Flashing onboard eMMC from microSD card

25/7/2014


 

Build kernel

Prepare environment on your host Linux machine

  • Get your cross compiler:

sudo apt-get install gcc-arm-linux-gnueabihf
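A quick sanity check that the cross compiler is installed and on your PATH:

arm-linux-gnueabihf-gcc --version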
  • Create your work directory and get the kernel build scripts from Robert Nelson's git repository:

sudo mkdir beaglebone
cd beaglebone
sudo git clone git://github.com/RobertCNelson/linux-dev.git
cd linux-dev
# checkout the v3.8 branch
sudo git checkout origin/am33x-v3.8 -b tmp
cd ..
  • Get the latest stable Linux kernel (it will be downloaded to the linux-stable directory):

sudo git clone git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git
  • Check the configuration in the linux-dev/system.sh file:

sudo cp system.sh.sample system.sh
sudo nano system.sh
  • Change/uncomment the following lines:

#ARM GCC CROSS Compiler:
CC=<your path to arm-linux-gnueabihf- files>   # usually CC=/usr/bin/arm-linux-gnueabihf-

#LINUX KERNEL GIT:
LINUX_GIT=<your path to linux-stable directory>

#LINUX KERNEL START ADDRESS
#FOR TI:OMAP3/4/AM35xx
ZRELADDR=0x80008000

#MMC device name
MMC=/dev/sdX   # X is the letter of your SD card
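If you are unsure which letter your SD card was assigned, listing the block devices on the host usually makes it obvious (names and sizes will differ on your machine):

lsblk -o NAME,SIZE,MODEL,MOUNTPOINT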
  • Run the build script:

sudo ./build_kernel.sh
  • In the kernel configuration tool enable debug support for SPI drivers, save the configuration and continue building the kernel.

Deploying the kernel to an existing system installation on the SD card

  • Put your SD card into the card reader, then run the install_image.sh script in the linux-dev directory:

sudo ./tools/install_image.sh
  • Start your BBB with your microSD card installed and check if the new kernel has been deployed:

uname -r
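Since SPI driver debugging was enabled in the kernel configuration step above, you can additionally check on the BBB that the SPI controller was probed, for example:

dmesg | grep -i spi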

Deploying the kernel to a new system installation on the microSD card

  • Get your system image from Robert C. Nelson's site (for example Debian Wheezy), unpack it and run the setup_sdcard.sh script:

sudo wget http://rcn-ee.net/deb/rootfs/wheezy/debian-7.5-console-armhf-2014-05-06.tar.xz
sudo tar -xf debian-7.5-console-armhf-2014-05-06.tar.xz
cd debian-7.5-console-armhf-2014-05-06
sudo ./setup_sdcard.sh --mmc /dev/sdb --uboot bone
  • When the installation process is finished, deploy the new kernel as in the previous section of this tutorial.

  • Start your BBB with the microSD card installed - the username and password can be found in the user_password.list file in the unpacked image directory.
NOTE: Check Robert C. Nelson's site for other available Linux distributions.

Flashing the onboard eMMC from the microSD card

There are three ways to make your BBB boot from onboard eMMC memory:

  1. Use the image which can be flashed directly to the onboard eMMC memory while the system is booted from your microSD card (you do it directly on your BBB).

  • You can get one from Robert C. Nelson's site or the ARMhf website:

sudo wget http://rcn-ee.net/deb/flasher/wheezy/BBB-eMMC-flasher-debian-7.5-2014-05-15-2gb.img.xz
  • Install it to the BBB's internal eMMC memory:

xz -cd BBB-eMMC-flasher-debian-7.5-2014-05-15-2gb.img.xz | sudo dd of=/dev/mmcblk1
  • Power down your BBB, remove the microSD card and start your BBB again. It will now boot directly from onboard eMMC.

sudo shutdown -hP now
  2. Use the image which can flash the onboard eMMC from the microSD card (you do it on your host Linux machine).

  • The procedure is very similar to the previous one. The same image (with the img.xz extension) can be installed on the microSD card. Log in as root first, and check your microSD card device ID (sdX), e.g. with lsblk:

xz -cd BBB-eMMC-flasher-debian-7.5-2014-05-15-2gb.img.xz > /dev/sdX
  • When the installation is finished, insert the microSD card into the BBB, push down the S2 switch (the one next to the microSD card slot) and power on the BBB. Hold the switch until the first of the USER LEDs comes on. The flashing process is then running, and it is finished when all of the USER LEDs are on steadily - this can take up to 45 minutes, depending on the size of the image.

  • When the process is finished, turn off your BBB, remove the microSD card and start your BBB again. It will now boot directly from the onboard eMMC.


  3. Use a copying script which will copy your microSD card to the onboard eMMC memory (you do it directly on your BBB).

  • While running your BBB with the system booted from the microSD card, get the script from Robert C. Nelson's git repository:

sudo git clone git://github.com/RobertCNelson/tools/
cd tools/scripts/
  • Run the copying script:

sudo ./beaglebone-black-copy-microSD-to-eMMC.sh
  • When the script is finished power down your BBB, remove the microSD card and start your BBB again. It will now boot directly from onboard eMMC.

sudo shutdown -hP now

Using the microSD card as an extra storage device

  • On your Linux machine, using a card reader, format the microSD card and create a single FAT32 partition - you can use Disk Utility or GParted.

  • Mount your microSD device and, using the nano editor, create a new file on it - uEnv.txt - with the following content:

mmcdev=1
bootpart=1:2
mmcroot=/dev/mmcblk1p2 ro
optargs=quiet
  • Save the file, insert the microSD card into your BBB and turn it on. The system will boot from the internal eMMC and the microSD card will be recognized as the /dev/mmcblk0 device. You can now mount it, for example in the /mnt directory:

sudo mount /dev/mmcblk0p1 /mnt
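If you want the card mounted automatically at every boot, a hypothetical /etc/fstab entry could look like this (adjust the partition and mount point to your setup):

/dev/mmcblk0p1  /mnt  vfat  defaults,nofail  0  0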

All of the above instructions are based on information from the following links:

  • http://www.armhf.com/index.php/boards/beaglebone-black/
  • http://elinux.org/BeagleBoardUbuntu#Flasher
  • http://circuitco.com/support/index.php?title=BeagleBoneBlack
  • http://avedo.net/653/flashing-ubuntu-13-04-or-debian-wheezy-to-the-beaglebone-black-emmc/
  • http://derekmolloy.ie/beaglebone/beaglebone-adding-usb-wi-fi-building-a-linux-kernel/

3DAudioSense was awarded in the "zacznij.biz" competition for startup projects

20/5/2014


 
3DAudioSense was one of the five awarded semi-finalist projects in last month's competition for startup projects - Zacznij.biz - read more.
Zacznij.biz is a contest for teams aiming to present their early-stage projects in front of a larger audience, including potential investors. It is organized by the Lewiatan Confederation and Lewiatan Business Angels.

The grand jury of the contest consisted of business experts, business angels and investors who regularly make decisions about investing in potentially successful projects that they later support.
Out of 104 contestant projects, 3DAudioSense made it to the semi-final - the best 8 - which entitled the team to present the project at a stand during the Grand Finale Gala on 4th March 2014 in Warsaw.

The competition itself started in September 2013 and was divided into several stages which included both preparation/practice and evaluation sessions. The last evaluation was based on a presentation made in front of the Grand Jury, which decided to honor 3DAudioSense with an award.

During the Grand Finale Gala the team had a chance to present the project's details to business angels, investors and technical people. The project received a very positive reception, which led to further discussions about potential funding from VCs.

Many valuable guests gave their opinions and advice on how to improve the project from the business perspective. It has been a valuable lesson for the team, which is now even more certain about the project's value from both technical and business standpoints.


3DAudioSense architecture view

5/4/2014


 
AudioSense is a Wireless Acoustic Sensor Network (WASN) which can be used for real-time spatial audio recording. The main goal of this approach is an object-based audio representation, i.e. signals which represent individual sound sources. Individual sound sources are extracted from a sound mixture using sound source separation algorithms such as Independent Component Analysis. The sound object representation gives great flexibility in audio scene reconstruction for both loudspeaker and headphone setups. One of the finest features introduced by this kind of representation is the possibility of interactive manipulation of sound objects at the receiver/user side.

System requirements

  • The system is supposed to record a spatial audio scene using wireless sensors placed in a given area with a minimum spacing of 1 m,
  • The system should work in real time to allow live audio recording and streaming,
  • Audio capturing should focus on recording spatial audio objects together with information about the location of the objects in the scene,
  • The sound should be recorded with good quality (comparable to 64 kbps per channel in traditional systems with audio compression applied),
  • The system should be mobile and easy to configure,
  • The system should work for a minimum of 8 hours on batteries embedded in each device.

AudioSense architecture

The system consists of the WASN part and the sound processing part.

WASN is a heterogeneous network with two classes of devices:
  • Acoustic Sensors (AS) – this class of devices is responsible mainly for sound recording with microphones, A/D conversion, initial compression (e.g. using AAC) and transmission to the Sensor Nodes that are one level higher in the logical hierarchy of the network (a minimal capture-and-streaming sketch follows this list).
  • Sensor Nodes (SN) – this class of devices performs the same functions as above and additionally aggregates the audio streams from many Acoustic Sensors.
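As a minimal sketch of the Acoustic Sensor's capture/compress/transmit path, the pipeline below captures from an ALSA device, encodes to AAC at 64 kbps and sends the stream over UDP. It assumes a GStreamer 1.0 installation with the voaacenc plugin available; the device name, bitrate and receiver address are placeholders, not the actual 3DAudioSense configuration:

gst-launch-1.0 alsasrc device=hw:1,0 ! audioconvert ! audioresample ! \
voaacenc bitrate=64000 ! aacparse ! mpegtsmux ! \
udpsink host=192.168.0.10 port=5004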

Aggregated streams are forwarded to the nearest Gateway which serves as an interface between the WASN and the 3D Audio Processing Unit.

The 3D Audio Processing Unit performs:
  • Decoding,
  • Synchronization of the audio streams,
  • Sound source separation – object representation,
  • 3D audio coding (e.g. MPEG-H 3D Audio).
The encoded audio is then transmitted over the Internet to the client side, where sound rendering is performed.

The total bit rate of MPEG-H 3D Audio varies from 256 up to 1200 kbps for 22.2-channel material. An interesting feature provided by MPEG-H 3D Audio is the ability to decode and render spatial audio for different loudspeaker setups as well as headphones.

Hardware implementation

Acoustic Sensor
  • Texas Instruments' BeagleBone Black (BBB) – a low-power single-board computer based on a 1 GHz ARM Cortex-A8 CPU. The CPU reaches about 2.0 DMIPS/MHz (Dhrystone Million Instructions Per Second per MHz) and supports the NEON instruction set, which is very useful for the signal processing performed during audio coding.
  • A/D conversion on the acoustic sensor is performed by the AudioCape – a BeagleBone extension board based on high quality A/D and D/A converters with a sampling rate up to 96 kHz and 32-bit resolution.
  • Microphone array – the AudioSense board, developed as a BBB extension, equipped with a pair of Monacor MCE4000 microphones, low-noise amplifiers and an acoustic localization module.
  • Bluetooth interface to communicate with the nearest Sensor Node.
BeagleBone Black, AudioCape and Microphone array (BBB extension).
3D Audio Processing Unit
  • Based on Adapteva's Parallella board
This energy-efficient single-board computer is based on a Xilinx Zynq 7000 SoC (system on chip) supported by an Epiphany III multicore accelerator. The Epiphany chip contains 16 high-performance RISC CPU cores, each of which can operate at 1 GHz and deliver 2 GFLOPS. This computer consumes only up to 5 watts, which makes it very convenient to use in a WASN system like AudioSense.


The Birth of a new 3D audio related Blog

11/11/2013


 
Let's imagine a situation where you can capture three-dimensional sound in an easy way - just using small audio sensors. Then you will be able to reproduce immersive sound on any loudspeakers or headphones.

3D AudioSense is a research project which focuses on the recording and delivery of 3D audio captured using a distributed wireless sensor network. It is mainly split between two activities:
  • spatial audio processing and coding - we will focus on techniques such as sound localization, separation, parametric representation of sound objects and 3D audio compression,
  • audio streaming over a wireless sensor network - it is important to understand the hardware and protocol limitations in the context of battery-operated nodes.
Moreover, this blog will capture some thoughts, ideas and standards related to the above topics.

3D audio is a really exciting area for research as well as part of a growing market for new products. Therefore, we are really happy to be able to make our contribution to this topic. Check back here often or subscribe to our feed (either Twitter or RSS) to keep up to date with our work.



    Author

    3D AudioSense is a research project which focuses on capturing a spatial audio scene using a distributed wireless sensor network.

    The project is funded by the National Centre for Research and Development (NCBiR), Poland, under the "Leader" programme.

