Christoph Lauer – Christoph's Homepage

Acoustic signal classification toolbox for arbitrary industry-class audio signals


Based on statistical machine-learning methods from artificial intelligence and computational linguistics, we developed a specific algorithm framework for so far unsolved problems in industry. The toolbox addresses the broad spectrum of automatic classification and detection problems for arbitrary industry-class audio signals, which differ greatly from the speech signals that many other machine-learning algorithms were designed for. We achieve an excellent recognition rate that the classical time- and frequency-domain methods commonly used in acoustic signal processing cannot match. The application field spans the full spectrum of typical, as yet unsolved industrial tasks, for example: acoustic quality control in mass production, mechanical-damage monitoring, ball-bearing failure warning in offshore wind power stations, reject detection in fire-brick fabrication, detection of transmission failures in gearboxes during production, motor monitoring, remote fault detection of mechanical systems, fracture testing of cast-iron parts and car water pumps; even foul fruit can be detected, and many other applications besides.

The technique and algorithmic system behind our improved signal processing achieves a classification rate for audio signal detection, classification and fabrication-error detection that has not been seen before, and goes far beyond classical frequency-domain resonance analysis. Our toolbox has a modular organization with a concatenative structure of three basic building blocks:

BBB1: The digital input signal can be extracted either from an acoustic camera with beamforming or from classical air-borne/structure-borne microphones. Impulse-response or resonance-frequency testing scenarios, as in classical material testing, are also supported; there the acoustic signal comes either from a classical mechanical energy impact or from sweep-sine based impulse-response extraction with a dual-piezo contact-treatment system.

BBB2: The physical feature extraction transforms the digital audio signal into the representation that best matches the specific needs. This can be a frequency-band representation, a wavelet-tree based decomposition, or a super-fine Wigner-Ville time-frequency analysis for the precise examination of short-time signals.
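As a flavor of what a frequency-band feature extraction can look like, here is a minimal sketch that collapses a magnitude spectrum into a handful of logarithmically spaced band energies (my illustration of the general idea, not the toolbox code):

#include <math.h>

// collapse a magnitude spectrum into B log-spaced band energies
void band_features(const float *mag, int bins, float *feat, int B)
{
  for (int b = 0; b < B; b++)
    feat[b] = 0.0f;
  for (int i = 0; i < bins; i++)
  {
    // map the linear bin index to a logarithmically spaced band
    int b = (int)(B * log1pf((float)i) / log1pf((float)bins));
    if (b >= B) b = B - 1;
    feat[b] += mag[i] * mag[i];  // accumulate the band energy
  }
  for (int b = 0; b < B; b++)
    feat[b] = log1pf(feat[b]);   // compress the dynamic range
}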

BBB3: The classification process uses a complex system of algorithms we developed for industry-class applications. Our algorithms have nothing to do with classical phoneme-, word- and grammar-based speech recognition systems; they are the result of over ten years of experience in signal classification and machine-learning algorithms. Our classification system is developed especially for signals that differ from speech signals, and we can adapt it to almost every specific task.

We are a team of engineers with innovative solutions. Don't hesitate to contact us if you have an unsolved problem in quality control, function testing, process control, or any other acoustic signal classification task. We have a fully functional testbed environment where fast results can be achieved if test signals are available. We also consult for clients that need embedded environments, in both software design and hardware development.


KEYWORDS: acoustics, acoustic sound testing, resonance analysis, noise testing, natural frequency measurement, material testing, acoustic process control, signal detection, microphone array, beamformer, acoustic material testing, noise evaluation, vibration measurement, classifier, quality control, resonance testing, natural frequency, resonance methods, psychoacoustics, speech recognition, audio, order analysis, FFT, Fourier analysis, acoustic quality assurance, acoustic quality control, acoustic testing technology.


Lab Instrumentation


Realizing more and more embedded projects grows our hardware measurement laboratory piece by piece. Our latest acquisition is a digital spectrum analyzer.

Rigol DSA 815 TG – 1.5GHz Digital Spectrum Analyzer.
Atten ADS 1102 CAL – 100MHz Digital Storage Oscilloscope.
Fluke 8845A – 35ppm Precision Reference Digital Multimeter
Hameg 203 – Classical Oscilloscope
Linear and Switching Power Supplies.
FPGA Board, ARM Image-Processing demo board….
Precision Optical Microscope.
Air bubble etching Machine.
Self-made Multiple-Part-Component-Tester (link).
Multimeters, LCR meter, clamp-on ammeter, function generator, IR thermometer, Weller soldering iron, precision measurement microphone, HF equipment, various µC programmers, Mertex analog multimeter, logic analyzer, sensors, all kinds of mechanical tools, VSWR bridge, PC…
Tons of Adapters and Cables, special connectors and special tools….
The full range of active and passive parts…


Fuzzy Classification


The classification of acoustic noise signals, or of signals from an arbitrary measurement chain, is easy to handle with methods that arose from language technology. These methods are mostly based on Hidden-Markov-Model (HMM) and Gaussian-Mixture-Model (GMM) classification systems with a specially adapted front-end feature extraction that matches the required needs in terms of the signal statistics. As we mentioned here before, we have a running HMM/GMM demo classifier system. But their excellent classification quality is bought at the cost of the performance of the underlying algorithms, which is often not realizable in modern embedded systems. Other classification methods achieve much better performance and allow nearly instantaneous results on programmable logic components with a reasonable amount of complexity. In this presentation we focus on the method of fuzzy clustering and compare it with other analysis techniques. The presentation is in German.
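For readers who want to experiment, the clustering core of fuzzy c-means fits in a few lines of C. The following is a textbook sketch (my illustration, not the code behind the presentation); the feature extraction is left out and the membership matrix u must be initialized with random values normalized per sample:

#include <math.h>

#define N 256    /* number of feature vectors */
#define D 8      /* feature dimension */
#define C 3      /* number of clusters */
#define M 2.0    /* fuzzifier m > 1 */

static double x[N][D];    /* the input features */
static double u[C][N];    /* the fuzzy membership degrees */
static double cen[C][D];  /* the cluster centers */

/* squared Euclidean distance with a small floor against division by zero */
static double dist2(const double *a, const double *b)
{
  double s = 1e-12;
  for (int d = 0; d < D; d++)
    s += (a[d] - b[d]) * (a[d] - b[d]);
  return s;
}

/* alternating fuzzy c-means updates of centers and memberships */
void fcm_iterate(int iterations)
{
  for (int it = 0; it < iterations; it++)
  {
    /* update the centers as membership-weighted means */
    for (int j = 0; j < C; j++)
    {
      double wsum = 0.0;
      for (int d = 0; d < D; d++) cen[j][d] = 0.0;
      for (int i = 0; i < N; i++)
      {
        double w = pow(u[j][i], M);
        wsum += w;
        for (int d = 0; d < D; d++) cen[j][d] += w * x[i][d];
      }
      for (int d = 0; d < D; d++) cen[j][d] /= wsum;
    }
    /* update the memberships from the relative cluster distances */
    for (int i = 0; i < N; i++)
      for (int j = 0; j < C; j++)
      {
        double s = 0.0;
        for (int k = 0; k < C; k++)
          s += pow(dist2(x[i], cen[j]) / dist2(x[i], cen[k]), 1.0 / (M - 1.0));
        u[j][i] = 1.0 / s;
      }
  }
}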


DSP Book – Chapter 1


I decided to write a collection of useful analog and digital signal processing algorithms, implementations, and the underlying theory. The name of the book is "Contemporary Signal Processing Algorithms". The first 72 pages can be downloaded below.

Note: The text of this early stage version is only a rough draft.


A generic Java array-viewer tool


I didn't find any adequate Java array viewer for very large two-dimensional arrays, so I decided to write one that accepts any primitive (int, double, float...) or generic (derived from Object) array and displays it in a scroll pane on the screen. The implementation is very simple. The view(array) method is overloaded and accepts any type of array. It is also possible to set the title line of the table and see the corresponding selections. For example:

import clauer.tools.*;

public class ArrayViewerDemo
{
  public static void main(String[] args)
  {
    // generate the demonstration array
    float[][] fa = new float[1000][2000];
    for (int x = 0; x < 1000; x++) {
      for (int y = 0; y < 2000; y++) {
        fa[x][y] = x * y;
      }
    }

    // the usage is very simple
    ArrayViewer av = new ArrayViewer();
    av.view(fa);
  }
}

⇾  ArrayViewer is on GitHub

Realtime-Audio-DSP Tutorial with the ARM STM32F4-Discovery Board


The widespread ARM Cortex-M4 STM32F4-Discovery board includes everything beginners and experienced developers need to start developing audio signal-processing algorithms. The board costs 15€ and has a built-in digital omnidirectional microphone (MP45DT02, PDM output over I2S) and a headphone driver with integrated DAC (CS43L22, Class-D, PCM over I2S, controlled over I2C). This is an ideal constellation for the development of audio algorithms: the microphone signal can be passed through to the headphone with digital signal processing in between, so we can hear the microphone signal or the processed signal over connected headphones. This allows you to hear the effect of various DSP algorithms. These programs are useful for two reasons: (1) they let you quickly get the system doing something interesting, giving you confidence that it works, and (2) they provide a template for creating programs of your own. Based on the officially available demonstration projects from ARM and STM, I've built a ready-to-use startup project with the MDK-ARM (Microcontroller Development Kit) IDE under Windows. The project has everything preconfigured for the implementation of your DSP code. For demonstration purposes a high-pass filter can be enabled with the user button, which is indicated by the blue LED. The green LED indicates the running ring buffer, and the red LED signals audio clipping. The binaries are also included in the project in case you only want to try out the pass-through and the filters.

1.) Download the Toolchain

The free version of MDK-ARM is restricted to projects with a maximum binary size of 32kB, which is more than enough for our purposes. The latest version can be grabbed here. Download and install it, and also install the packages for the STM32F4-Discovery board.

2.) Download Audio-DSP Example

The source code for this tutorial is on GitHub –> https://github.com/clauer14/Audio-DSP. Please clone or download the repository: git clone https://github.com/clauer14/Audio-DSP.git

3.) Build the Code

The project should compile out of the box (at least with MDK-ARM version 5.14). Upload the binary to the board (you may have to add "STM32F4xx Flash" to the "Programming Algorithms" in the settings of the Flash Tools).

4.) How the Code Works

Starting from the main function, the WavePlayBack() function initializes the microphone handler and starts the sampling. The PDM filter library converts the PDM samples to raw 16-bit PCM samples in the AUDIO_REC_SPI_IRQHANDLER() function; these are periodically stored in a ring buffer with the help of an endless loop in the WavePlayBack() function. The fill_buffer function calls the dsp() function, where we place our DSP code. The following is the code from the dsp.c file:

// local includes
#include "dsp.h"

// the user button switch
extern volatile int user_mode;

// our core dsp function
void dsp(int16_t* buffer, int length)
{
  // initialize some values
  static float previous;
  int i;
	
  // if switched on, apply the filter
  if (user_mode & 1)
  {		
    // perform a simple first-order high pass (6 dB/octave)
    for (i=0; i<length; i++)
    {
      buffer[i] = (int16_t)( (float)buffer[i] -(float)previous * 0.75f );
      previous = buffer[i];
    }
  }
}

This simple function implements a first-order high-pass filter that can be switched on with the user button. You can place your own DSP code in this function. You can also use the ARM DSP library from the CMSIS (Cortex Microcontroller Software Interface Standard) with #include "arm_math.h". This gives us an excellent starting point for our own DSP audio algorithms…
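For instance, a single CMSIS call can replace a hand-written loop. The following sketch (my illustration, not part of the repository) halves the signal level of the current block with arm_scale_q15():

#include "arm_math.h"

// halve the level of the current audio block in place:
// 0x4000 is 0.5 in q15 format, the post shift of 0 keeps the scale
void dsp(int16_t* buffer, int length)
{
  arm_scale_q15(buffer, 0x4000, 0, buffer, length);
}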

5.) Make Use of the ARM-CMSIS Processor DSP Libraries

Now that we have a running DSP loop that feeds us 512-sample 16-bit integer buffers, we can take advantage of the 168MHz ARM Cortex-M4 processor and the CMSIS library that ARM provides. First we include the CMSIS library header files and define some basic values. Because we have no operating system, we place the needed arrays in global (static) storage to prevent stack overflows; the FIR filter instance also carries the state from the previous frame across buffer borders, which avoids distortions at the window boundaries. We take some filter coefficients from Matlab. The image below shows the impulse response and the transfer function of the filters.
We can now apply the CMSIS FIR filter function to the buffer. The user button can be used to switch between the high-pass and low-pass filters. The following is our code for dsp.c:

#include <dsp.h>

// arm cmsis library includes
#define ARM_MATH_CM4
#include "stm32f4xx.h"
#include <arm_math.h>

// arm c library includes
#include <stdbool.h>

// the user button switch
extern volatile int user_mode;
int old_user_mode;

#define NUM_FIR_TAPS 56
#define BLOCKSIZE    512

// place the buffer signals and the filter coefficients in global (static) storage
arm_fir_instance_q15 FIR;
q15_t outSignal[BLOCKSIZE];
q15_t fir_coeffs_lp[NUM_FIR_TAPS] = { -217,   40,  120,  237,  366,  475,  527,  490,  346,
                                       100, -217, -548, -818, -947, -864, -522,   86,  922,
                                      1904, 2918, 3835, 4529, 4903, 4903, 4529, 3835, 2918,
                                      1904,  922,   86, -522, -864, -947, -818, -548, -217,
                                       100,  346,  490,  527,  475,  366,  237,  120,   40,
                                      -217,    0,    0,    0,    0,    0,    0,    0,    0, 
                                         0,    0};  // low pass at 1KHz with 40dB at 1.5KHz for SR=16KHz
q15_t fir_coeffs_hp[NUM_FIR_TAPS] = { -654,  483,  393,  321,  222,   76, -108, -299, -447,
                                      -501, -422, -200,  136,  520,  855, 1032,  953,  558,
                                      -160,-1148,-2290,-3432,-4406,-5060,27477,-5060,-4406,
                                     -3432,-2290,-1148, -160,  558,  953, 1032,  855,  520,
                                       136, -200, -422, -501, -447, -299, -108,   76,  222,
                                       321,  393,  483, -654,    0,    0,    0,    0,    0,
                                         0,    0,}; // high pass at 1.5KHz with 40dB at 1KHz for SR=16KHz
q15_t fir_state[NUM_FIR_TAPS + BLOCKSIZE];
bool firstStart = false;

// forward declaration of the filter-initialization helper used in dsp()
void initFilter(void);

// the core dsp function
void dsp(int16_t* buffer, int length)
{
  // only enable the filter if the user button is pressed
  if (user_mode & 1)
  {
    // we initialize the filter only when needed to prevent glitches at the beginning of new buffers
    if (firstStart == false || old_user_mode != user_mode)
    {
      initFilter();
      old_user_mode = user_mode;
      firstStart = true;
    }
	
    // process with FIR
    arm_fir_fast_q15(&FIR, buffer, outSignal, BLOCKSIZE);

    // copy the result
    arm_copy_q15(outSignal, buffer, length);
  }
}

// we initialize and switch the filter here
void initFilter()
{
  // apply the low pass filter
  if (user_mode & 1)
    arm_fir_init_q15(&FIR, NUM_FIR_TAPS, fir_coeffs_lp, fir_state, BLOCKSIZE);
  // or apply the high-pass filter, depending on the user button switch mode
  if (user_mode & 2)
    arm_fir_init_q15(&FIR, NUM_FIR_TAPS, fir_coeffs_hp, fir_state, BLOCKSIZE);
}

6.) Demonstration

7.) Datasheets and Annex

A description how to control the CS43L22 audio codec can be found here.
A description how to control the MP45DT02 microphone can be found here.

Datasheets:

The Board STM32F4 Discovery
ARM Cortex-M4 Processor STM32F407VG
Audio Codec CS43L22
Microphone MP45DT02
Acceleration Sensor LIS3DSH
Board Schematics

N.B.: while I recorded the video above, I pointed two 20-watt halogen lamps at the board and got sound dropouts whenever the shadow of my finger reached the U8 ESD filter (see schematics), which has a blank silicone surface. 🙂 This hardware bug is reproducible and seems to be caused by the photoelectric effect.


A handy network-protocol-analyzer for Linux


This is a compact network packet analyzer for the Linux console. The default device is eth0. The protocol and the port can be filtered; the port filter restricts the output to TCP and UDP packets, other packets are ignored. The program should be started as root with sudo (have a look in the Makefile). With the "-l" option the program can be started in the line-wise mode, where every packet is presented as one line in the terminal. Packet-Analyzer is on GitHub –> https://github.com/clauer14/packet-analyzer. Feel free to implement your own packet filter (see the sketch at the end of this post).

usage:

packet-analyzer [-l] [-d device] [-f protocol filter (t|u)] [-p port filter]

example:

packet-analyzer
packet-analyzer -l
packet-analyzer -p 1234
packet-analyzer -d eth1 -f u -p 1234

A packet looks like:

***********************TCP Packet*************************

Ethernet Header
 |-Destination Address : 00-50-56-F8-A9-80
 |-Source Address      : 00-0C-29-27-3D-B6
 |-Protocol            : 8

IP Header
 |-IP Version       : 4
 |-IP Header Length : 5 DWORDS or 20 Bytes
 |-Type Of Service  : 0
 |-IP Total Length  : 40 Bytes(Size of Packet)
 |-Identification   : 12491
 |-TTL              : 64
 |-Protocol         : 6
 |-Checksum         : 41710
 |-Source IP        : 192.168.204.148
 |-Destination IP   : 173.194.44.23

TCP Header
 |-Source Port          : 56266
 |-Destination Port     : 443
 |-Sequence Number      : 207075502
 |-Acknowledge Number   : 3540126002
 |-Header Length        : 5 DWORDS or 20 BYTES
 |-Urgent Flag          : 0
 |-Acknowledgement Flag : 1
 |-Push Flag            : 0
 |-Reset Flag           : 0
 |-Synchronise Flag     : 0
 |-Finish Flag          : 0
 |-Window               : 40440
 |-Checksum             : 11269
 |-Urgent Pointer       : 0

IP Header
00 50 56 F8 A9 80 00 0C 29 27 3D B6 08 00 45 00  .PV.....)'=...E.
00 28 30 CB                                      .(0.

TCP Header
40 00 40 06 A2 EE C0 A8 CC 94 AD C2 2C 17 DB CA  @.@.........,...
01 BB 0C 57                                      ...W

Data Payload

###########################################################
Statistics:
TCP          : 414
UDP          : 144
ICMP         : 0
IGMP         : 0
Others       : 2
Total        : 560
Bytes(total) : 141914
Bytes(avg)   : 253

In the line-wise mode the output looks like:

UDP - User Datagram -- 90 bytes from --> 127.0.1.1 to --> 127.0.1.1
UDP - User Datagram -- 102 bytes from --> 192.168.204.2 to --> 192.168.204.2
UDP - User Datagram -- 102 bytes from --> 127.0.1.1 to --> 127.0.1.1
UDP - User Datagram -- 102 bytes from --> 127.0.1.1 to --> 127.0.1.1
TCP - Transmission Control -- 74 bytes from --> 192.168.204.149 to --> 192.168.204.149
TCP - Transmission Control -- 293 bytes from --> 192.168.204.149 to --> 192.168.204.149
TCP - Transmission Control -- 60 bytes from --> 216.58.211.3 to --> 216.58.211.3
TCP - Transmission Control -- 356 bytes from --> 216.58.211.3 to --> 216.58.211.3
PUP - PUP -- 42 bytes from --> 61.182.192.168 to --> 216.58.211.3
VMTP - VMTP -- 60 bytes from --> 247.71.192.168 --> 192.168.204.149
TCP - Transmission Control -- 54 bytes from --> 192.168.204.149 to --> 192.168.204.149
TCP - Transmission Control -- 96 bytes from --> 216.58.211.3 to --> 216.58.211.3
TCP - Transmission Control -- 54 bytes from --> 192.168.204.149 to --> 192.168.204.149
TCP - Transmission Control -- 211 bytes from --> 192.168.204.149 to --> 192.168.204.149
TCP - Transmission Control -- 92 bytes from --> 192.168.204.149 to --> 192.168.204.149
TCP - Transmission Control -- 60 bytes from --> 216.58.211.3 to --> 216.58.211.3
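If you want to implement your own packet filter, the capture core of such a tool is essentially one AF_PACKET raw socket. A minimal sketch (my illustration, not the actual packet-analyzer source) looks like this:

#include <stdio.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <arpa/inet.h>
#include <linux/if_ether.h>

int main(void)
{
  unsigned char buffer[65536];

  // a raw packet socket delivers every frame from the link layer (needs root)
  int s = socket(AF_PACKET, SOCK_RAW, htons(ETH_P_ALL));
  if (s < 0) { perror("socket"); return 1; }

  while (1)
  {
    // read one complete link-layer frame
    ssize_t n = recvfrom(s, buffer, sizeof(buffer), 0, NULL, NULL);
    if (n < 0) { perror("recvfrom"); return 1; }

    // place your own protocol/port filter and header parsing here
    printf("captured a frame with %zd bytes\n", n);
  }
}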

A CAN Driver Library for the ARM Cortex-M4 based STM32F4


The STM32F4 ARM Cortex-M4 based µC series offers two CAN controllers, compliant with the 2.0A and 2.0B (active) specifications, with a bitrate of up to 1 Mbit/s. They can receive and transmit standard frames with 11-bit identifiers as well as extended frames with 29-bit identifiers. Each CAN controller has three transmit mailboxes, two receive FIFOs with 3 stages, and 28 shared scalable filter banks (all of which can be used even if only one CAN controller is active). 256 bytes of SRAM are allocated for each CAN controller.

The STM32F4-Discovery board has an STM32F407VG µC with two CAN controllers, but no CAN transceiver, which normally performs the conversion between the single-ended CAN controller signals CAN Tx and CAN Rx and the bidirectional differential pair of the CAN bus, called CAN Hi and CAN Lo (High and Low). So we cannot connect our board directly to an existing CAN bus, but we can interconnect both controllers with the help of two 1N4148 diodes and a resistor to run some experiments with the CAN bus and transfer values from one controller to the other. (If we want to connect our board to a real CAN bus, we must use a transceiver, for example the TI SN65HVD230.) For our STM32F4-Discovery board we need to make the modifications described in the schematics image below.

Schematic

The source code for the project is available on Github: https://github.com/clauer14/STM32F4-CAN-Driver

The code is built with the Keil MDK-ARM armcc compiler. The main function in the CanDemo.c file shows the usage of the can.c driver file. We transfer one incrementing byte from CAN controller 2 to controller 1 and print the received value on the serial port and the LEDs. After we compile and upload the binary, we can see the CAN frames on the oscilloscope (connected between the resistor and ground).
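The transfer idea can be sketched with the ST Standard Peripheral CAN API (a simplified sketch of my own; the actual can.c driver on GitHub wraps this differently, and the controller initialization and filter setup are omitted here):

#include "stm32f4xx.h"

void transfer_demo(void)
{
  uint8_t counter = 0;
  CanTxMsg tx;
  CanRxMsg rx;

  tx.StdId = 0x123;            // 11-bit standard identifier
  tx.IDE   = CAN_Id_Standard;
  tx.RTR   = CAN_RTR_Data;
  tx.DLC   = 1;                // one data byte per frame

  while (1)
  {
    tx.Data[0] = counter++;    // the incrementing byte
    CAN_Transmit(CAN2, &tx);   // send on CAN controller 2

    // poll CAN controller 1 for the looped-back frame
    if (CAN_MessagePending(CAN1, CAN_FIFO0) > 0)
      CAN_Receive(CAN1, CAN_FIFO0, &rx);
  }
}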

In debug mode the Keil MDK-ARM gives us realtime insight into various processor internals of the Cortex-M4. The code also implements a serial connection over SWV (the serial wire viewer, which is part of the serial wire debug interface), so we can write to the serial console with printf.

The logic analyzer view of the CAN bus.

The output of the serial console.

The “trace exceptions” view shows the interrupts from the CAN controller.

The sent and received values in the realtime view.

The signal characteristics of a CAN frame.

A Realtime Audio Demonstration of an Adaptive Filter with the ARM Cortex-M4 based STM32F4-Discovery Board


Recently I finished the audio pass-through from the microphone to the headphone with the STM32F4-Discovery board (see here), so I now have a wonderful playground for testing digital audio algorithms. After a deeper look into the ARM CMSIS libraries I implemented a simple adaptive filter that can be trained with the user button. The filter is adapted to the surrounding noise during a silent training phase and later applied to the signal; the resulting error signal is the input signal with the trained noise removed. Nothing special, but it works fine. Attached is the code of the dsp.c function for the previously mentioned testbed project.

#include <dsp.h>
// arm cmsis library includes
#define ARM_MATH_CM4
#include "stm32f4xx.h"
#include <arm_math.h>
// arm c library includes
#include <stdbool.h>
// the user button switch
extern volatile int user_mode;
#define NUM_FIR_TAPS 128
#define BLOCKSIZE    512
#define MU           1
// place the buffer signals and the filter coefficients in global (static) storage
arm_lms_instance_q15 LMS; 
q15_t outSignal[BLOCKSIZE];
q15_t refSignal[BLOCKSIZE];
q15_t errSignal[BLOCKSIZE];
q15_t fir_coeffs[NUM_FIR_TAPS];
q15_t state[NUM_FIR_TAPS + BLOCKSIZE];
bool firstStart = false;
// the core dsp function
void dsp(int16_t* buffer, int length)
{
  // we only come here once
  if (firstStart == false)
  {
    // set the filter to an ideal filter
    arm_fill_q15(0, fir_coeffs, NUM_FIR_TAPS);
    fir_coeffs[0] = 32767;
    // set the reference signal to zero
    arm_fill_q15(0, refSignal, BLOCKSIZE);
    // initialize the adaptive LMS filter
    arm_lms_init_q15(&LMS, NUM_FIR_TAPS, fir_coeffs, state, MU, BLOCKSIZE, 0);
    // remember that the initialization is done
    firstStart = true;
  }
	
  // store the silent reference signal in the train mode and switch back to normal mode
  if (user_mode & 1)
  {
    // store the silent noise in the reference signal
    arm_copy_q15(buffer, refSignal, BLOCKSIZE);
    // set the train mode back
    user_mode ++;
  }
  else 
    // apply the adaptive filter. It adapts the output to the reference signal,
    // so the error signal results in the "denoised" signal.
    arm_lms_q15(&LMS, buffer, refSignal, outSignal, errSignal, BLOCKSIZE);
  // copy the error signal
  arm_copy_q15(errSignal, buffer, BLOCKSIZE);
}

A floating-point absolute value implementation: fabs(x)


While analyzing some performance-critical algorithms on a digital signal processor, I found a simple implementation of the floating-point absolute value that runs faster than some macro implementations.

double x;
*(((int *) &x) + 1) &= 0x7fffffff;

Depending on the number of calls, this can or should be wrapped in an inline function. Note that this trick assumes a 32-bit int and a little-endian memory layout, where the sign bit of the IEEE-754 double lives in the upper word.
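A more portable variant of the same trick (my addition, assuming C99 and IEEE-754 doubles) avoids the endianness assumption by punning through a 64-bit integer:

#include <stdint.h>
#include <string.h>

// clear the IEEE-754 sign bit through a 64-bit integer view
static inline double fast_fabs(double x)
{
  uint64_t bits;
  memcpy(&bits, &x, sizeof bits);    // defined behavior, unlike pointer casts
  bits &= 0x7fffffffffffffffULL;     // drop the sign bit
  memcpy(&x, &bits, sizeof bits);
  return x;
}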

Framebuffer input devices in Linux


Camera devices under Linux are often mapped from the device tree via DMA to a kernel framebuffer device. A detailed overview can be found here: www.kernel.org/framebuffers. When the screen is mapped to /dev/fb0 and the input device to /dev/fb1, we can simply copy the camera image to the screen with:

cat /dev/fb1 > /dev/fb0

The following example shows resource-efficient access to the framebuffer input device via memory mapping into a Qt GUI class…

// standard c includes
#include <fcntl.h>
#include <unistd.h>

// standard c++ includes
#include <iostream>

// linux includes
#include <linux/fb.h>
#include <sys/ioctl.h>
#include <sys/mman.h>

// qt includes
#include <QtGui>

int main ()
{
  int fx;                     // the optical cam width
  int fy;                     // the optical cam height
  int fbdev;                  // file descriptor for the framebuffer device
  unsigned char* fb;          // optical cam framebuffer pointer

  // first open the framebuffer device file
  fbdev = open("/dev/fb1", O_RDONLY);
  if (fbdev == -1) 
  {
    std::cout << "ERROR: cannot open framebuffer device file." << std::endl;
    return -1;
  }

  // get the framebuffer screen metadata
  struct fb_var_screeninfo screen_info;
  if (ioctl(fbdev, FBIOGET_VSCREENINFO, &screen_info)) 
  {
    std::cout << "ERROR: get screen info error." << std::endl;
    return -2;
  }
  fx = screen_info.xres;
  fy = screen_info.yres;
  int bp = screen_info.bits_per_pixel/8; // byte per pixel
  std::cout << "INFO: Camera Resolution = (" << fx << ", " << fy << ", " << bp << "Byte" << ")" << std::endl;

  // generate the memory map
  fb = (unsigned char*)mmap(0,fx*fy*bp, PROT_READ, MAP_SHARED, fbdev, 0);
  if (fb == MAP_FAILED)
  {
    std::cout << "ERROR: mmap error." << std::endl;
    return -3;
  }

  // instantiate the Qt GUI element
  QImage fbImage(fb, fx, fy, QImage::Format_RGB32);
  ...
  // we could now resize the image and paint it on the screen...
  ...

  return 0;
}

Acoustic cancelation using an advanced beamforming technique


The Acoustic Camera:

In my previous post (Acoustic Camera) I showed how to apply the delay-and-sum beamformer to extract acoustic images from data collected with a microphone array. To do so we “beam” (more details about the time-domain delay-and-sum beamforming can be found here) the microphone signals pointwise, step by step, over a virtual projection plane in the room until we have an image.

The Acoustic-Cancelation:

Another commonly used beamforming technique is the so-called source isolation, where the sound output from a specific location can be extracted. Would it not be nice to have the opposite of this signal, meaning that we can hear all the secondary sound in the environment of the source but not the source itself? A kind of black hole for the sound source. This is the so-called acoustically canceled signal (at least I call it so here :-). It is like the source-isolation signal for a specific location, but instead of extracting the signal from this location it erases it. Think of a possible application, for example the injection pump in a car engine, if you want to hear only the sound from the environment of the pump but not the pump itself.


The Test Data and the Algorithm:

According to their own declaration, LOUD (Large acOUstic Data Array Project, see the image above) is the largest microphone array in the world. Fortunately they have some sample data available, so we have something to play with. The microphone array consists of 60 x 17 microphones over 1.77 x 0.48 meters. For our test case the test signal is located at the point (1.4351, -0.4572, -0.3048) meters in front of the microphone array. All experiments were made with MatLab/Octave. The acoustic cancelation was implemented in the time domain…
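The time-domain core of both signals can be sketched in a few lines of C (a simplified sketch of my own with sample-accurate integer delays; the actual experiments were done in MatLab/Octave, and one plausible formulation of the cancelation is to subtract the delay-aligned source estimate from the plain channel sum):

#define CHANNELS 1020

// x[m][i]  : sample i of microphone channel m
// delay[m] : source-to-microphone delay of channel m in samples,
//            precomputed from the array geometry and the speed of sound
// iso[i]   : the source-isolated (delay-and-sum) signal
// can[i]   : the acoustically canceled signal (everything but the source)
void beamform(float **x, const int *delay, float *iso, float *can, int frames)
{
  for (int i = 0; i < frames; i++)
  {
    float aligned = 0.0f;  // channels delay-aligned to the source position
    float plain   = 0.0f;  // plain sum over all channels
    for (int m = 0; m < CHANNELS; m++)
    {
      int k = i - delay[m];
      if (k >= 0)
        aligned += x[m][k];
      plain += x[m][i];
    }
    iso[i] = aligned / CHANNELS;            // the isolated source signal
    can[i] = (plain - aligned) / CHANNELS;  // the source contribution erased
  }
}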

THE SUM OVER ALL CHANNELS: First, the sum signal over all 1020 microphone array channels without any beamforming:

THE SOURCE ISOLATION: The next signal is the source-isolated signal from the source position:

THE ACOUSTICALLY CANCELED SIGNAL: And finally, the acoustically erased signal from all channels without the source signal. We hear only the secondary sound, in this case mainly the echo from the walls:


Near-Field Acoustical Holography (NAH)


Near-field acoustical holography, unlike the far-field approach of the beamforming algorithm, results in sharp images at low frequencies, with the limitation that it works only for nearby sources. The border between near-field and far-field is continuous and lies at about r ≈ λ, which is 3.43 m for 100 Hz, 34.3 cm for 1 kHz, and 3.43 cm for 10 kHz. We implemented a NAH algorithm and used the LOUD microphone array corpus to extract the sound pressure level (SPL) hologram in the half space between the source (zs) and the array (zh). The resulting NAH video has a frame rate of 16000 fps! The 2.0-second input signal (32000 frames) is stretched into a 22:13-minute NAH timelapse video with a frame rate of 24 fps. The dimensions of the hologram correspond directly to the dimensions of the 1.80 m x 51 cm array (60×17 microphones). In our test case the signal source is located 70 cm in front of the array, in the middle at the bottom, so the resulting image is not as sharp as it could be if the source were placed directly 10 cm in front of the array (for example in front of an engine bonnet, guitar, car door, ventilator…). Play the video at 11:18, when the sweep sound starts. Details about the implementation can be found here: Fourier Acoustic. Below you also find one channel of the 2.0-second input signal and the room hologram images for the frequencies 300 Hz, 1 kHz and 5 kHz.
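The heart of such a NAH implementation is the angular-spectrum propagation of one frequency slice from the hologram plane toward the source plane. A condensed sketch (my illustration, assuming FFTW, a monochromatic complex pressure slice p on the array grid, and ignoring the k-space regularization a robust implementation needs for the amplified evanescent components):

#include <complex.h>
#include <math.h>
#include <fftw3.h>

// propagate one frequency slice by the distance dz (in meters);
// nx, ny: grid points, dx, dy: microphone spacing, k = 2*pi*f/c
void nah_propagate(fftw_complex *p, int nx, int ny,
                   double dx, double dy, double k, double dz)
{
  fftw_plan fwd = fftw_plan_dft_2d(ny, nx, p, p, FFTW_FORWARD,  FFTW_ESTIMATE);
  fftw_plan bwd = fftw_plan_dft_2d(ny, nx, p, p, FFTW_BACKWARD, FFTW_ESTIMATE);

  fftw_execute(fwd);  // spatial FFT: pressure -> angular spectrum

  for (int iy = 0; iy < ny; iy++)
    for (int ix = 0; ix < nx; ix++)
    {
      // spatial frequency of this FFT bin, wrapped to +/- Nyquist
      double kx = 2.0 * M_PI * ((ix <= nx / 2) ? ix : ix - nx) / (nx * dx);
      double ky = 2.0 * M_PI * ((iy <= ny / 2) ? iy : iy - ny) / (ny * dy);
      double kz2 = k * k - kx * kx - ky * ky;

      // propagating waves get a phase shift, evanescent waves an
      // exponential weight (regularize this in a real implementation)
      double complex w = (kz2 >= 0.0) ? cexp(-I * sqrt(kz2) * dz)
                                      : exp(sqrt(-kz2) * dz);

      double complex v = (p[iy * nx + ix][0] + I * p[iy * nx + ix][1]) * w;
      p[iy * nx + ix][0] = creal(v) / (nx * ny);  // normalize the FFT pair
      p[iy * nx + ix][1] = cimag(v) / (nx * ny);
    }

  fftw_execute(bwd);  // back to the spatial domain: the propagated slice
  fftw_destroy_plan(fwd);
  fftw_destroy_plan(bwd);
}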

 
One channel of the 2-second input signal, 5 cm in front of the array:

Timelapse NAH video from the 2-second sound signal, 5 cm in front of the array:

Following is a hologram image of the room between the source and the array (0…70 cm) for a fixed point in time, at 1 kHz:

The same at 5 kHz:

And at 300 Hz:

Other plots:

ZynqBerry Linux setup walkthrough


Recently I finished the Linux setup on the ZynqBerry (see here), so I now have a wonderful playground for testing signal processing on an FPGA. The ZynqBerry has the same form factor as the Raspberry Pi and the same peripheral connectors. The ZYNQ SoC combines an FPGA with an ARM processor, a wonderful combination! The ARM side is called the processing system (PS), the FPGA side the programmable logic (PL). The toolchain for the PL development is Vivado by Xilinx. The build tools for the PS are the Xilinx-SDK under Windows and PetaLinux under Linux, which can be installed in a virtual machine. Please note that this tutorial, the scripts and the listed commands are only valid for the tools at the time this tutorial was written. All tools needed to build the system are freely available. Linux experience is needed.

0.) Introduction:

In order to build the system the following files are required:
1.) FSBL.elf –> the first stage boot loader.
2.) zsys_wrapper.bit –> the FPGA bitstream file.
3.) u-boot.elf –> the universal boot loader, which loads the kernel from the SD-Card.
4.) image.ub –> the kernel image.
5.) BOOT.bin –> contains FSBL.elf, u-boot.elf and zsys_wrapper.bit; uploaded to the flash.
6.) debian.img –> the Linux image.
It is not necessary to build everything from scratch; a preconfigured project for a device with video, sound and camera is available.

1.) Toolchain installation:

1.1) Vivado:

The Windows HDL tool Vivado can be downloaded here: Vivado Design Suite (the free WebPack edition can be used).

1.2) Xilinx Software Development Kit (XSDK):

For embedded applications XSDK can be downloaded here: Xilinx Software Development Kit (only needed to build the FSBL).

1.3) PetaLinux:

PetaLinux is the Xilinx embedded Linux distribution for the ZYNQ. It contains the kernel, u-boot, the rootfs, applications…

1.3.1) Install Ubuntu Linux in the VM:

Windows host, Linux guest. Install VirtualBox: Virtual Box, with the preconfigured Ubuntu 16.04 from www.osboxes.org –> OSBoxes (all cores, 4GB RAM). The password for the user osboxes is “osboxes.org”. Change the sudoers file (see here; add the line osboxes ALL=(ALL) NOPASSWD: ALL to the /etc/sudoers file). Enable file exchange with “shared folders” in VirtualBox to Windows (enable “Auto-Mount” and “Make-Permanent”). In order to access the shared folders from Linux, add the user osboxes to the group vboxsf in Ubuntu:
> sudo usermod -aG vboxsf osboxes

1.3.2) Install PetaLinux:

Install the required Ubuntu packages (Xilinx UserGuide UG1144):
> sudo apt-get install tofrodos iproute2 gawk gcc git make net-tools libncurses5-dev tftp zlib1g-dev libssl-dev flex bison libselinux1 lib32z1 lib32ncurses5
Download PetaLinux –> PetaLinux Download, install with:
> ./petalinux-v2016.2-final-installer.run /opt/pkg

2.) Download the Prebuild Project:

Download the preconfigured project from Trenz-Elektronik –> Reference Design (download the build package, not the noprebuild one). Vivado has a limitation of 256 characters for file names –> don’t unpack the project folder too deep into the file system. The prebuild folder contains the binaries, the os folder contains the PetaLinux project files. Also included are Windows scripts to open the preconfigured Vivado and XSDK projects.

3.) FPGA:

Set the correct paths in the design_basic_settings.cmd file and open the script Vivado_create_project_guimode.cmd. Do not change anything here; just run “Generate Bitstream”, which can take up to 30 minutes! Export the hardware with the menu File->Export->Export Hardware to the os/petalinux/Subsystems/linux/hw-description folder.

5.) XSDK, FSBL:

Open the sdk_create_prebuilt_project_guimode.cmd script in the project root folder (XSDK with the preconfigured hardware platform specification). The FSBL can be built with:
–> Menu File->New->Application Project.
–> Project Name: FSBL, do not change anything here, press Next.
–> Select Zynq FSBL (te modified app…).
–> Open FSBL->src->fsbl_hooks.c and make sure #define DIRECT_CAMERA_VIEW is disabled (no direct HW copy of the camera stream to HDMI).
–> Save the modifications; the FSBL will be rebuilt automatically.
–> Select the FSBL.elf file (FSBL->Binaries) with the mouse and copy the file to the prebuild folder.

6.) Petalinux Subsystem and the Kernel:

Build the PetaLinux system in Ubuntu:
> mkdir ~/Development
> cd ~/Development
> cp -r /media/sf_PATH_TO_PROJECT_FOLDER.../os/petalinux/ . –> from Windows shared folder….
> cd petalinux
> source /opt/pkg/petalinux-v2016.2-final/settings.sh –> initialize PetaLinux
> export CROSS_COMPILE=arm-xilinx-linux-gnueabi-
> export ARCH=arm
> petalinux-config --get-hw-description=./hw-description/
> vi subsystems/linux/config
Change here:
SUBSYSTEM_MEMORY_PS7_DDR_0_BANKLESS_SIZE [=0x1F700000]
SUBSYSTEM_ROOTFS_SD [=y]
> vi subsystems/linux/configs/device-tree/system-top.dts –> only look, do not change anything
> petalinux-config –> boot from SD-card: Image Packaging…–>Root filesystem…–>SD-Card
> petalinux-config -c kernel –> do not change anything
> petalinux-config -c rootfs –> the root file system is not built because we boot directly from SD –> disable everything in Libs, Apps and Modules.
> petalinux-build –> can take some time
> cp -t /media/sf_PATH_TO_PROJECT_FOLDER../prebuild images/linux/zsys_wrapper.bit images/linux/u-boot.elf images/linux/image.ub –> copy build results to Windows

7.) The Debian Linux Image:

The script mkdebian.sh builds a Debian (Jessie, ARM, armhf) distribution image. Two Ubuntu packages must be installed:
> apt-get install debootstrap qemu
Download this mkdebian.sh script and copy into ~/Development/petalinux folder in Ubuntu.
> sudo ./mkdebian.sh
The script generates the te0726-debian.img linux image file. Copy the image to Windows.
> cp te0726-debian.img /media/sf_PATH_TO_PROJECT_FOLDER../prebuild/

8.) Install everything on the ZynqBerry:

8.1) Write the Linux Image to the SD-Card:

Use Win32DiskImager to write the te0726-debian.img to the SD-Card. After the image has been written, a partition is visible in Windows.

8.2) Copy the Kernel to the SD-Card:

Copy the image.ub from the prebuild folder to the first partition in Windows.
Copy the /misc/img/prebuild/u-boot.rgba file also to the first partition in Windows.

8.3) Create the BOOT.bin image:

Use the XSDK menu Xilinx-Tools->Create-Boot-Image to create a boot image. Add the FSBL.elf, zsys_wrapper.bit and u-boot.elf files in this order. Save the BIN file to the prebuild folder.

8.4) Flash the BOOT.bin file:

Connect the ZynqBerry with micro USB cable to the PC. Use the XSDK menu Xilinx-Tools->Program-Flash and program the BOOT.bin file to the ZynqBerry (flash type “qspi_single”).

9.) Boot Linux:

Connect the ZynqBerry with a micro USB cable to the PC. Connect a mouse, keyboard and HDMI monitor. Use putty and listen on the serial console (COM4). Boot up the device. Log in with user root and password root. Start the X server with startx. This can be done automatically with an entry in /etc/rc.local:
> echo 'startx &' >> /etc/rc.local
Open terminal on the ZynqBerry and update the system:
> apt-get update
> apt-get upgrade

10.) (Optional) Hello Qt World:

Install Qt-Creator:
> apt-get install qtcreator
Open qt-creator and create a new “Qt Widgets Application” project; select QMainWindow as the base class. Make sure your mainwindow.cpp looks like:

#include "mainwindow.h"
#include "ui_mainwindow.h"
#include "qlabel.h"

MainWindow::MainWindow(QWidget *parent) : QMainWindow(parent),
  ui(new Ui::MainWindow)
{
    ui->setupUi(this);
    QLabel* l = new QLabel("Hello World", this->centralWidget());
}

MainWindow::~MainWindow()
{
    delete ui;
}

Run the program…

11.) (Optional) Additional Hardware:

11.1) Cable Network:

The LAN9514 ethernet controller is supported in Debian.

11.2) USB Sticks:

> mount /dev/sda1 /media/UsbStick

11.3) The Framebuffer Camera:

The ZynqBerry has the Raspberry Pi CSI (Camera Serial Interface). For the Raspberry Pi camera in version 1.3 the I2C initialization code is available. It can be found in the PetaLinux folder under ../petalinux/components/apps/rpi-camera. Copy the files to the ZynqBerry and compile with:
> gcc rpi-camera.c -o rpi-camera
Do the same for the reg program in ../petalinux/components/apps/reg
The camera can now be initialized with:
> rpi-camera /dev/i2c-5
> reg 0x43C10040 1
After the initialization the camera is available as the framebuffer device /dev/fb1. Test the camera by copying the camera image directly to the screen:
> cat /dev/fb1 > /dev/fb0
Record a camera stream with:
> ffmpeg -y -f fbdev -framerate 24 -i /dev/fb1 -s 320x240 test.mpg
Look here if you want to use the framebuffer camera image in Qt –> Qt Framebuffer Camera

11.4) The Sound Driver:

The kernel module for the alsa sound driver must be compiled with PetaLinux in Ubuntu:
> petalinux-config –> set Image-Packaging…->Root-filesystem->INITRAMFS
> petalinux-config -c rootfs –> make sure Modules->te-audio-codec is selected
The compiled kernel module ./build/Linux/rootfs/modules/te-audio-codec/te-audio-codec.ko can now be copied to the ZYNQ at /lib/modules/4.0.0-xilinx/extra/te-audio-codec.ko. Load it manually, and add the following line to /etc/rc.local:
> insmod /lib/modules/4.0.0-xilinx/extra/te-audio-codec.ko
> echo 'insmod /lib/modules/4.0.0-xilinx/extra/te-audio-codec.ko' >> /etc/rc.local
Reboot and test the ALSA sound driver by playing an audio file with VLC.

11.5) Change the Display Resolution:

The resolution is fixed at 1280×720@60Hz and can be changed as follows:
a.) In the Vivado block design open the video-out IP and open the video timing generator. Change the settings for the resolution and note the frame sizes. The values in the clocking wizard IP must also be set. The base clock frequency is horiz_frame_size x vert_frame_size x FPS; CLK2 is twice the base frequency, and CLK3 ten times the base frequency (see the worked example after this list).
b.) Open fsbl_hooks.c in the XSDK and change the values for the VDMA.
c.) Change the resolution in the PetaLinux device-tree with vi subsystems/linux/configs/device-tree/system-top.dts.
d.) Rebuild everything.
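As a worked example (my numbers, assuming the standard 720p60 timing with a total frame size of 1650 x 750 pixels): the base clock is 1650 x 750 x 60 Hz = 74.25 MHz, so CLK2 = 148.5 MHz and CLK3 = 742.5 MHz.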

12.) Troubleshooting:

12.1) Reset ZYNQ via the XMD console:

In case the ZYNQ does not answer while programming, or the flash is damaged, the ZYNQ can be reset via the XMD console: XSDK menu Xilinx-Tools->XMD-Console:
XMD% connect arm hw
XMD% rst -debug sys
XMD% targets
XMD% disconnect 64
